Hidden software that can record every keystroke typed on an HP laptop was recently discovered. The impact is far-reaching, with many current HP laptop models affected by the finding. In this post we will first investigate the keylogging issue, and then discuss a better method for preventing this kind of problem in the future.
Security researcher Michael Myng found the keylogging code in software drivers that were preinstalled on HP laptops to make the keyboard work. He discovered the keylogger code while inspecting the Synaptics touchpad software, as he attempted to figure out how to control the keyboard backlight on an HP laptop. Mr Myng said, “The keylogger was disabled by default, but an attacker with access to the computer could have enabled it to record what a user was typing.”
According to HP, the code was originally built into the Synaptics software to help debug errors. HP acknowledged that this debug software could be exploited to cause a “loss of confidentiality”, but said that neither Synaptics nor HP had access to customer data as a result of the flaw.
Software like this keylogging code is common, and is originally put in place to help developers debug their software; it is often referred to as ‘debug code’. However, when the code ships in the release software and a simple mechanism exists for switching it on, it becomes a potential security risk. Debug code like this, once discovered, would normally be registered as a CWE (Common Weakness Enumeration) entry. CWE is a universal online dictionary of weaknesses that have been found in computer software. The dictionary is maintained by the MITRE Corporation and can be accessed free of charge worldwide.
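To make the risk concrete, here is a minimal, hypothetical sketch of the pattern described above: debug code that ships in the release build, “disabled by default” but switchable at run time. In the HP case the switch was a registry key; in this sketch it is a plain global flag, and all names (`debug_trace_enabled`, `handle_key_event`) are illustrative, not taken from the Synaptics source.

```c
#include <stdio.h>
#include <string.h>

/* The "simple mechanism" for switching the debug path on.
 * Anyone with access to the machine can flip it. */
int debug_trace_enabled = 0;

/* Called for every key event the driver handles. */
void handle_key_event(int scancode, char *log, size_t log_size)
{
    if (debug_trace_enabled) {
        /* Debug-only path: records every keystroke -- effectively
         * a keylogger once an attacker enables the flag. */
        size_t used = strlen(log);
        snprintf(log + used, log_size - used, "%d;", scancode);
    }
    /* ... normal key handling continues here ... */
}
```

The point of the sketch is that the dangerous behaviour is entirely benign during development; it only becomes a weakness because both the logging code and its on/off switch survive into the shipped product.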
However, because this particular flaw can be exploited in the field by malicious software, it is more likely to be registered as a CVE (Common Vulnerabilities and Exposures) entry. CVE is a catalogue of known security threats. The catalogue is sponsored by the United States Department of Homeland Security (DHS), and threats are divided into two categories: vulnerabilities and exposures.
The unfortunate irony is that less than seven months ago a similar exploit was identified in the HP audio driver (CVE-2017-8360). In that case, benign debug logging code in the audio driver could also be used to track keystrokes.
Mr Myng provides a detailed analysis of the most recent keylogging issue here; it is worth a read for anyone interested in the approach and the technical details of identifying the problem.
Looking at the debug code shown in Figure 1, we can see that it is the typical kind of debug code that almost any software program might use during testing or fault finding.
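For readers without the figure to hand, this kind of driver debug code usually boils down to a compile-time trace macro. The sketch below is illustrative only: the identifiers (`TRACE`, `set_backlight_level`) are hypothetical and not taken from the Synaptics source, but the shape is typical.

```c
#include <stdio.h>

/* Typical debug/trace pattern: when DEBUG_TRACE is defined the macro
 * logs activity to stderr; otherwise it compiles away to nothing. */
#ifdef DEBUG_TRACE
#define TRACE(fmt, ...) fprintf(stderr, "[trace] " fmt "\n", __VA_ARGS__)
#else
#define TRACE(fmt, ...) ((void)0)
#endif

/* Example driver-style function sprinkled with trace calls. */
int set_backlight_level(int level)
{
    TRACE("set_backlight_level(%d)", level);
    if (level < 0 || level > 100)
        return -1;                 /* reject out-of-range requests */
    TRACE("backlight set to %d", level);
    return 0;
}
```

When the guard macro is left switchable at run time instead of being compiled out, the same harmless-looking pattern becomes the risk discussed above.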
The use of logging techniques like this is an easy way to inspect what an application is doing. We discussed various techniques like this in a prior post, ‘Illuminating System Integration‘. The critical issue here is that the code was left in the application after it shipped, leaving it open to someone who might want to exploit the system to steal login/password details. This brings us to the next question: is there a better and safer way to collect this type of debug information, without the risk of leaving potential exploits in the software after it ships?
Is there a better way?
One of the challenges with resolving defects in a fully integrated system is how to capture the data needed to understand the root cause of the problem. Using a debugger often changes the timing of the system in a way that masks the bug or prevents the system from running properly.
If we take a step back, let’s look at the concept of code coverage. Code coverage is a measure of how much of the code has been executed, with the ideal quality goal of ensuring that every line of code is executed before our software ships. How do we measure structural code coverage? The process is simple: we take the code we want to measure and instrument it with markers that log data showing where the application has executed. In Figure 2 we see an example of the original code and the instrumented code side by side.
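As a rough sketch of what such a side-by-side comparison looks like, here is a small function before and after a hypothetical coverage instrumenter has run over it. The marker scheme (a flag per basic block) is a simplification of what real coverage tools emit.

```c
/* Original function, as the developer writes it: */
int clamp(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

/* The same function after instrumentation: each basic block records
 * its execution in a coverage table that the tool reads back later. */
unsigned char covered[4];   /* one flag per basic block, zero-initialised */

int clamp_instrumented(int v, int lo, int hi)
{
    covered[0] = 1;                           /* function entry */
    if (v < lo) { covered[1] = 1; return lo; }
    if (v > hi) { covered[2] = 1; return hi; }
    covered[3] = 1;                           /* fall-through path */
    return v;
}
```

After a test run, any flag still at zero identifies a block the tests never reached, which is exactly the data needed to close coverage gaps before shipping.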
By using an Instrumenter to instrument the code for coverage, we can rerun the Instrumenter whenever the original code is modified and get a new version of the instrumented code. This also means the original code NEVER needs to contain any instrumentation logic, as the Instrumenter can recreate it automatically. In this way it is also NEVER possible for instrumented code to be accidentally shipped.
This same approach and technology can help us implement a safer logging solution, like the one HP attempted above. We can address this challenge using existing code-instrumentation technology to make the process of adding trace code as simple as possible. There are two critical capabilities we need to build a better logging solution:
- The ability to insert a block of code anywhere in our program and ensure it is syntactically correct for the region in which it is inserted
- The ability for the inserted logic to automatically ‘follow’ the region of code it was inserted into, so that if the code changes, we do not need to manually move the debug code
We can call this inserted debug code a ‘Probe Point’. An example is shown using the tool VectorCAST/Probe™. In Figure 3, we can see a code editor that allows us to select the statement where we want to insert our trace logic.
Once we have identified where we want to insert the code, the tool (as shown in Figure 4) allows us to insert our Probe Point either above or below the statement.
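Conceptually, the result of inserting a Probe Point above a statement is something like the sketch below. This is not VectorCAST/Probe output; the function and the probe body are hypothetical, and the key point is that this text exists only in the generated, instrumented copy, never in the original source.

```c
#include <stdio.h>

/* A function with a (hypothetical) Probe Point inserted "above"
 * its return statement to log the values flowing through it. */
int apply_discount(int price, int percent)
{
    /* --- begin tool-inserted Probe Point --- */
    fprintf(stderr, "probe: price=%d percent=%d\n", price, percent);
    /* --- end tool-inserted Probe Point --- */
    return price - (price * percent) / 100;
}
```

Because the probe is syntactically just a block of code valid at that location, it can log, compute, or branch like any other code in the function.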
Not only can we use this mechanism to check data values in our system; by inserting a Probe Point before a statement, we can also set or manipulate data values in our code. This capability allows us to trigger difficult-to-replicate behaviour, or even try out a potential bug fix before actually changing the original source code.
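A sketch of that second use, fault injection, might look like the following. Again this is illustrative: the `force_sensor_failure` flag and `read_sensor` function are hypothetical stand-ins for a probe body that overwrites a value to force a rarely taken error path.

```c
#include <limits.h>

/* Flag a (hypothetical) probe would flip to inject the fault. */
int force_sensor_failure = 0;

int read_sensor(int raw)
{
    /* --- begin tool-inserted Probe Point ---
     * Overwrite the incoming value to simulate a failed read,
     * without editing the original source. */
    if (force_sensor_failure)
        raw = -1;
    /* --- end tool-inserted Probe Point --- */

    if (raw < 0)
        return INT_MIN;     /* error path we want to exercise */
    return raw / 4;         /* normal scaling */
}
```

This lets a tester exercise error-handling code on real hardware without waiting for the fault to occur naturally.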
Finally, getting back to the original issue that HP now faces: Probe Points are inserted in the same automatic way as coverage instrumentation, so the original code NEVER needs to be modified with these debug hooks. Instead, any time a change is made to the software, we rerun the Instrumenter (in this case a ‘Probe Instrumenter’) to recreate the Probe Point with the appropriate debug code.
In this post we have looked at what is considered a major security flaw in HP laptops, brought about by the developers’ need to debug the laptop’s software. The primary issue is that the debug software was shipped in the final release, leaving it open to possible exploitation. This was the second time this year HP has run into an issue like this. To avoid it, we have proposed a better approach to inserting debug software into our code. The concept of Probe Points has many advantages:
- Dynamically instruments device software to isolate defects
- Can be inserted during Unit, Integration, or System Testing
- Captures internal data values
- Can be used to record detailed control flow
- Injects faulty values to test error handling
- Debugs hard-to-trigger race conditions
Most critically, the revised approach guarantees that potentially exploitable debug software does NOT ship in the released application.