2024-10-29

Instructions on how to disable/tweak the Pixel Refresher on LG OLED CX

NB: This would have been posted in answer to this or this but unfortunately these posts are archived, so it's not possible to comment on them... And when I tried to create a topic in reddit's /r/OLED community, the moderators shadow-removed the post without providing any explanation. Way to create a welcoming community, guys...

PREAMBLE

Now, before people jump on the "It's there for a reason - You should not disable the LG Pixel Refresher!" bandwagon, let me explain my reasons for doing so:

I have been the happy owner of an LG OLED65CX for the past four years but, over the last few months, it has started to develop the dreaded "TV not turning on on first try" issue (1)(2) (which is absolutely maddening in terms of LG having completely let their customers down on an otherwise great product, through shoddy PSU hardware design - The Samsung LCD TV this LG OLED replaced is now 17 years old and still powers up and works fine, for crying out loud!). Coincidentally, the power issue started to happen around the 2000-hour panel usage mark.

Which means that the TV can no longer properly turn on from standby after it has been off for a while.

Which means that the long form Pixel Refresher is unable to run at all.

Which means that, every time I shut down or (eventually manage to) power up the TV, I get the reminders about the Pixel Refresher, which I cannot do anything about!

And of course, I have tried running the Pixel Refresher manually, but even a manual run still waits for the TV to enter complete standby before starting, which means it hits the same issue as a scheduled run, since the TV can simply no longer power itself back up on first attempt.

So, yes, when you don't have any other choice, there do exist legitimate reasons why you may want to disable the Pixel Refresher.

And, yes, it is possible to accomplish just that (at least on OLED CX models, I obviously cannot vouch for any other model) provided that you have enabled root access.

DISCLAIMER

THE COMMANDS BELOW ARE FOR ADVANCED USERS ONLY AND ARE PROVIDED WITHOUT ANY IMPLIED WARRANTY OF FITNESS FOR A SPECIFIC PURPOSE. SHOULD YOU CHOOSE TO RUN ANY OF THESE COMMANDS, YOU ACCEPT THAT THERE EXISTS A RISK THAT THEY MAY RESULT IN HARDWARE DAMAGE AND/OR LOSS OF WARRANTY, AND AGREE THAT THE RESPONSIBILITY FOR RUNNING SAID COMMANDS LIES ENTIRELY WITH YOU.

COMMANDS TO DISABLE/TWEAK THE LG PIXEL REFRESHER

Log on to your OLED TV as root through ssh/telnet and issue the command:

luna-send -d -n 1 -f "luna://com.webos.service.oledepl/getPixelRefresherInfoList" '{ "subscribe": false }'

This should return something like:

{
    "jbInterval": 2000,
    "returnValue": true,
    "jbLastTime": 0,
    "offrsInterval": 4,
    "offrsCount": 427,
    "offrsLastTime": 2030,
    "subscribed": false,
    "jbCount": 0,
    "pnwashKeyLock": true
}

If it doesn't return anything, STOP and don't proceed any further, as your model or firmware is using a different way of controlling the Pixel Refresher from the one I am describing.

In the above, jb is the prefix for the "long form" Pixel Refresher, which is scheduled every 2000 hours (jbInterval), and offrs is the prefix for the "short form" Pixel Refresher, which runs every 4 panel usage hours or so (offrsInterval). Obviously, the one that is of interest to us, since it's the one that produces the popups, is the jb one.

At this stage, you have two ways to approach the issue. The first is to increase the interval at which the long form Pixel Refresher is scheduled to run. For instance, you can set it to 4000 hours by issuing:

luna-send -d -n 1 -f "luna://com.webos.service.oledepl/setPixelRefresherInfoList" '{ "jbInterval": 4000,"subscribe": false }'

Or you can tell the system that the long form Pixel Refresher has run, by issuing:

luna-send -d -n 1 -f "luna://com.webos.service.oledepl/setPixelRefresherInfoList" '{ "jbLastTime": 2000, "subscribe": false }'
luna-send -d -n 1 -f "luna://com.webos.service.oledepl/setPixelRefresherInfoList" '{ "jbCount": 1, "subscribe": false }'

For good measure, I ran both on my model, and I was finally free from the annoying Pixel Refresher reminder popups. Of course, this doesn't do anything to solve the major power-on issue that appears to affect many CX models, including mine, but if, for whatever reason, you are looking for a means to tweak or disable the LG Pixel Refresher (on CX and similar models), now you know how to do it.
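
Of course, nothing prevents you from scripting that second approach so it reapplies itself. Below is a minimal Python sketch of the idea (assuming your rooted TV has a python3 interpreter, which is not a given on webOS, and that luna-send outputs just the JSON object seen above): whenever a long form run comes due, it simply marks it as freshly completed.

#!/usr/bin/env python3
# Minimal sketch: mark the long form Pixel Refresher as freshly run whenever
# it comes due. ASSUMES a python3 interpreter on the rooted TV (not a given
# on webOS) and that luna-send outputs just the JSON object shown earlier.
import json
import subprocess

SVC = 'luna://com.webos.service.oledepl'

def luna(method, payload):
    out = subprocess.run(['luna-send', '-d', '-n', '1', '-f',
                          '%s/%s' % (SVC, method), json.dumps(payload)],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

info = luna('getPixelRefresherInfoList', {'subscribe': False})
hours = info['offrsLastTime']  # panel hours, as recorded by the short form runs
if hours - info['jbLastTime'] >= info['jbInterval']:
    luna('setPixelRefresherInfoList', {'jbLastTime': hours, 'subscribe': False})
    luna('setPixelRefresherInfoList',
         {'jbCount': info['jbCount'] + 1, 'subscribe': False})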

Oh, and in case you want to explore what other methods the com.webos.service.oledepl service provides, you can also issue:

ls-monitor -i com.webos.service.oledepl

It should also be noted that com.webos.service.oledepl is actually mapped to the /usr/sbin/eplmanager executable, which you can also run manually with the -d (Debug) option if you are feeling adventurous. But I REALLY wouldn't advise doing so, as you can probably break your TV beyond repair if you try things at random there, whereas the luna-send methods are assumed to have some form of validation...

2024-08-29

Reboot to UEFI firmware settings from UEFI Shell

reset -c -fwui

There, you have it.

2024-08-19

Adding EDK2 as a submodule without cloning it

Yeah, I have my reasons for this (mostly that I commit on Windows and run the builds through GitHub Actions, but compile from a non-version-controlled Linux machine, and I don't want to waste time and disk space on yet another lengthy and cumbersome clone of EDK2).

This is mostly taken from https://stackoverflow.com/a/37378302/1069307:

mkdir edk2
git update-index --add --cacheinfo 160000 b158dad150bf02879668f72ce306445250838201 edk2
cat <<EOF >>.gitmodules
[submodule "edk2"]
	path = edk2
	url = https://github.com/tianocore/edk2.git
EOF

Of course, you should replace the commit hash with whatever current or stable EDK2 commit hash you want to point to.

And with this, you'll have added EDK2 as a submodule of your project without going through a cumbersome clone.
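
And, if you find yourself repeating this dance across projects, it's trivial to script. Here's a rough Python take on the exact same steps (the script name and argument handling are made up, obviously):

#!/usr/bin/env python3
# Rough equivalent of the manual steps above. Usage: add_edk2.py <commit-hash>
import os
import subprocess
import sys

commit = sys.argv[1]
os.makedirs('edk2', exist_ok=True)
# Stage the gitlink (mode 160000 = submodule) without cloning anything
subprocess.run(['git', 'update-index', '--add', '--cacheinfo',
                '160000,%s,edk2' % commit], check=True)
# Then append the matching .gitmodules entry
with open('.gitmodules', 'a') as f:
    f.write('[submodule "edk2"]\n\tpath = edk2\n'
            '\turl = https://github.com/tianocore/edk2.git\n')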

2024-06-18

Downloading signtool.exe from Microsoft

In their typical fashion, unless you know what you're doing, Microsoft made it incredibly difficult to get your hands on a simple, basic executable that they should by all means provide as an easily accessible download, since it's one of the basic building blocks for safeguarding a Windows platform.

Well, we know what we're doing, which is to use a very handy technique that we picked up from actual malware, so, from PowerShell:

curl.exe -L -A "Microsoft-Symbol-Server/10.0.0.0" https://msdl.microsoft.com/download/symbols/signtool.exe/910D667173000/signtool.exe -o signtool.exe

There. Now you have signtool and you can get on with your life without having to download 4 GB of extra garbage.
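
And in case curl.exe isn't at hand, the same trick is easy to replicate from any language that lets you set the User-Agent. For instance, in Python (same URL and spoofed User-Agent as above):

# Same download as the curl invocation above, using only the Python stdlib.
import urllib.request

req = urllib.request.Request(
    'https://msdl.microsoft.com/download/symbols/signtool.exe/910D667173000/signtool.exe',
    headers={'User-Agent': 'Microsoft-Symbol-Server/10.0.0.0'})
with urllib.request.urlopen(req) as resp, open('signtool.exe', 'wb') as f:
    f.write(resp.read())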

2021-01-05

Python script to fix EDK2 patches downloaded with Thunderbird

It looks like Thunderbird and the EDK2 mailing list don't play too nicely together, and you get annoying double line feeds inserted into patches sent to the list, which are a major pain to deal with. And since I've grown tired of manually having to fix something like this:

 
Subject:
[edk2-platforms][PATCH 1/1] Platform/RaspberryPi: Fix Linux kernel panic on reset/poweroff
From:
Pete Batard <pete@akeo.ie>
Date:
2021.01.05, 14:09
To:
devel@edk2.groups.io

Commit 94e9fba43d7e132be3c582c676968a7f408072c1 introduced an unconditional
call to PcdGet32 after we exit boot services, that produces a kernel panic
on Linux reset.

This addendum to the previous commit ensures that we only read the PCD and
apply the delay while we are still in UEFI, which is what we want anyway as
the goal was to fix the storage of NV variables set by the user from within
the UEFI firmware interface.

Signed-off-by: Pete Batard <pete@akeo.ie>
---
 Platform/RaspberryPi/Library/ResetLib/ResetLib.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
index 4a50166dd63b..a70eee485ddf 100644
--- a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
+++ b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
@@ -52,13 +52,13 @@ LibResetSystem (
      * Only if still in UEFI.

      */

     EfiEventGroupSignal (&gRaspberryPiEventResetGuid);

-  }

 

-  Delay = PcdGet32 (PcdPlatformResetDelay);

-  if (Delay != 0) {

-    DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",

-            Delay / 1000000, (Delay % 1000000) / 100000));

-    MicroSecondDelay (Delay);

+    Delay = PcdGet32 (PcdPlatformResetDelay);

+    if (Delay != 0) {

+      DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",

+              Delay / 1000000, (Delay % 1000000) / 100000));

+      MicroSecondDelay (Delay);

+    }

   }

   DEBUG ((DEBUG_INFO, "Platform %a.\n",

           (ResetType == EfiResetShutdown) ? "shutdown" : "reset"));

-- 2.29.2.windows.2

Into this:
Subject: [edk2-platforms][PATCH 1/1] Platform/RaspberryPi: Fix Linux kernel panic on reset/poweroff
From: Pete Batard <pete@akeo.ie>
Date: 2021.01.05, 14:09
To: devel@edk2.groups.io

Commit 94e9fba43d7e132be3c582c676968a7f408072c1 introduced an unconditional
call to PcdGet32 after we exit boot services, that produces a kernel panic
on Linux reset.

This addendum to the previous commit ensures that we only read the PCD and
apply the delay while we are still in UEFI, which is what we want anyway as
the goal was to fix the storage of NV variables set by the user from within
the UEFI firmware interface.

Signed-off-by: Pete Batard <pete@akeo.ie>
---
 Platform/RaspberryPi/Library/ResetLib/ResetLib.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
index 4a50166dd63b..a70eee485ddf 100644
--- a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
+++ b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
@@ -52,13 +52,13 @@ LibResetSystem (
      * Only if still in UEFI.
      */
     EfiEventGroupSignal (&gRaspberryPiEventResetGuid);
-  }
 
-  Delay = PcdGet32 (PcdPlatformResetDelay);
-  if (Delay != 0) {
-    DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",
-            Delay / 1000000, (Delay % 1000000) / 100000));
-    MicroSecondDelay (Delay);
+    Delay = PcdGet32 (PcdPlatformResetDelay);
+    if (Delay != 0) {
+      DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",
+              Delay / 1000000, (Delay % 1000000) / 100000));
+      MicroSecondDelay (Delay);
+    }
   }
   DEBUG ((DEBUG_INFO, "Platform %a.\n",
           (ResetType == EfiResetShutdown) ? "shutdown" : "reset"));
-- 2.29.2.windows.2

Here's a quick Python script that'll automate that for you:

import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('files', type=argparse.FileType('rb+'), nargs='+')
    args = parser.parse_args()

    for file in args.files:
        buffer = bytearray(file.read())

        # Delete initial empty lines
        while buffer and buffer[0] in (0x0d, 0x0a):
            del buffer[0]

        # Un-split 'Subject:', 'From:', etc. (i.e. everything located before
        # the '---' separator) by turning ':\r\n' into ': '
        end = buffer.find(b'\x0d\x0a---')
        i = 0
        while 0 <= i < end:
            if buffer[i] == 0x3a and buffer[i + 1] == 0x0d and buffer[i + 2] == 0x0a:
                del buffer[i + 1]
                buffer[i + 1] = 0x20
                end -= 1
            i += 1

        # Remove the doubled CRLFs from the diff chunks
        i = buffer.find(b'\x0d\x0a@@')
        while 0 <= i < len(buffer) - 3:
            if buffer[i:i + 4] == b'\x0d\x0a\x0d\x0a':
                del buffer[i:i + 2]
            i += 1
        file.seek(0)
        file.write(buffer)
        file.truncate()

2020-12-16

UEFI Hexdump

If you're developing UEFI firmware content, sooner or later you're going to want to dump binary data using the debug facility.

And so, without further ado:

(...)

#include <Library/BaseLib.h>
#include <Library/PrintLib.h>

(...)

STATIC
VOID
DumpBufferHex (
  VOID*  Buf,
  UINTN  Size
)
{
  UINT8  *Buffer = (UINT8*)Buf;
  UINTN  i, j;
  CHAR8  Line[80] = "";

  for (i = 0; i < Size; i += 16) {
    // Flush the previous line before starting a new one
    if (i != 0) {
      DEBUG ((DEBUG_INFO, "%a\n", Line));
    }
    Line[0] = 0;
    AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "  %08x  ", i);
    // Hex values, padded with spaces on the last line
    for (j = 0; j < 16; j++) {
      if (i + j < Size) {
        AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "%02x ", Buffer[i + j]);
      } else {
        AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "   ");
      }
    }
    AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), " ");
    // Printable ASCII representation
    for (j = 0; j < 16; j++) {
      if (i + j < Size) {
        if ((Buffer[i + j] < 32) || (Buffer[i + j] > 126)) {
          AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), ".");
        } else {
          AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "%c", Buffer[i + j]);
        }
      }
    }
  }
  DEBUG ((DEBUG_INFO, "%a\n", Line));
}

2020-08-08

Updating XML files with PowerShell

Say you have the following file.xml:

<?xml version="1.0" encoding="UTF-8"?>
<data>
  <item name="Item 1" id="0" />
  <item name="Item 2" id="1001" />
  <item name="Item 3" id="0" />
  <item name="Item 4" id="1002" />
  <item name="Item 5" id="1005" />
  <item name="Item 6" id="0" />
</data>

And you want to replace all those "0" id attributes with incremental values.

If you have PowerShell, this can be accomplished pretty easily with the following commands:

$xml = New-Object xml
$xml.Load("$PWD\file.xml")
$i = 2001; foreach ($item in $xml.data.item) { if ($item.id -eq 0) { $item.id = [string]$i; $i++ } }
$xml.Save("$PWD\updated.xml")

Now your output (updated.xml) looks like:

<?xml version="1.0" encoding="UTF-8"?>
<data>
  <item name="Item 1" id="2001" />
  <item name="Item 2" id="1001" />
  <item name="Item 3" id="2002" />
  <item name="Item 4" id="1002" />
  <item name="Item 5" id="1005" />
  <item name="Item 6" id="2003" />
</data>
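
Incidentally, if you need the same thing where PowerShell isn't available, a rough equivalent with Python's built-in xml.etree is just as short (expect the XML declaration to be formatted slightly differently, though):

# Same operation with Python's xml.etree instead of PowerShell's XML adapter.
import xml.etree.ElementTree as ET

tree = ET.parse('file.xml')
next_id = 2001
for item in tree.getroot().iter('item'):
    if item.get('id') == '0':
        item.set('id', str(next_id))
        next_id += 1
tree.write('updated.xml', xml_declaration=True, encoding='UTF-8')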

Easy-peasy...

2020-07-08

(Ab)using Microsoft's symbol servers, for fun and profit

Since I find myself doing this on a regular basis (Hail Ghidra!), and can never quite remember the commands.

Say you have a little Microsoft executable, such as the latest ARM64 version of usbxhci.sys, that you want to investigate using Ghidra.

Of course, one thing that can make all the difference between hours of "Where the heck is the function call for the code I am after?" and a few seconds of "Judging by its name, this call is the most likely candidate" is the availability of the .pdb debug symbols for the Windows executable you are analysing.

You may also know that, because of the huge corporate ecosystem they have where such information might be critical (as well as some government pressure to make it public), Microsoft does make available a lot of the debug information that was generated during the compilation of Windows components. Now, since it can amount to a large volume of data (one can usually expect a .pdb to be 3 to 5 times larger than the resulting code), this debug information is not usually provided with Windows, unless you are running a Debug/Checked build.

But it can "easily" be retrieved from Microsoft's servers. Here's how.

First of all, you need to ensure that you have the Windows SDK or Windows Driver Kit installed. If you have Visual Studio 2019 (remember, the Community Edition of VS2019 is free) with the C++ development environment, these should already have been installed for you. But really it's up to you to sort that out and alter the paths below as needed.

With this prerequisite taken care of, you should find a command-line executable called symchk.exe somewhere in C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\. This is the utility that can connect to Microsoft's servers to fetch the symbol files, i.e. the compilation .pdb's that Microsoft has made public.

So, let's say we have copied our ARM64 xHCI driver (USBXHCI.SYS - Why Microsoft suddenly decided to YELL ITS NAME is unknown) to some directory. All you need to do to retrieve its associated .pdb is issue the command:

"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\symchk.exe" /s srv*https://msdl.microsoft.com/download/symbols /ocx .\ USBXHCI.SYS

The /s flag indicates where the symbols should be retrieved from (here, Microsoft's remote server) and the /ocx flag, followed by a folder, indicates where the .pdb should be copied (here, the same directory as the one where we have our driver).

If everything goes well, the output of the command should be:

SYMCHK: FAILED files = 0
SYMCHK: PASSED + IGNORED files = 1

with the important part being that the number of PASSED files is not zero, and you should find a newly created usbxhci.pdb in your directory. Neat!
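
In case you're curious, there's no magic to the URL symchk requests: it is derived from the RSDS (CodeView) record that the linker embedded in the executable's debug directory. If you ever need to fetch a .pdb on a machine without the SDK/WDK, here's a rough, stdlib-only Python sketch of that derivation (a bare-bones illustration, not a symchk replacement):

#!/usr/bin/env python3
# Compute the Microsoft symbol server URL for a PE's .pdb from the RSDS
# (CodeView) record in its debug directory. Minimal error handling only.
import struct
import sys
import uuid

def rva_to_offset(rva, sections):
    # Translate a Relative Virtual Address to a file offset
    for vsize, va, rawsize, rawptr in sections:
        if va <= rva < va + max(vsize, rawsize):
            return rva - va + rawptr
    raise ValueError('RVA is not mapped by any section')

def pdb_url(path):
    data = open(path, 'rb').read()
    pe = struct.unpack_from('<I', data, 0x3c)[0]            # e_lfanew
    nsec, = struct.unpack_from('<H', data, pe + 6)
    opt_size, = struct.unpack_from('<H', data, pe + 20)
    opt = pe + 24                                           # optional header
    magic, = struct.unpack_from('<H', data, opt)
    dd = opt + (112 if magic == 0x20b else 96)              # data directories
    dbg_rva, dbg_size = struct.unpack_from('<II', data, dd + 6 * 8)
    sections = [struct.unpack_from('<IIII', data, opt + opt_size + i * 40 + 8)
                for i in range(nsec)]
    off = rva_to_offset(dbg_rva, sections)
    for i in range(dbg_size // 28):                         # 28 = sizeof(IMAGE_DEBUG_DIRECTORY)
        dbg_type, _, _, raw = struct.unpack_from('<IIII', data, off + i * 28 + 12)
        if dbg_type != 2 or data[raw:raw + 4] != b'RSDS':   # 2 = CODEVIEW
            continue
        guid = uuid.UUID(bytes_le=bytes(data[raw + 4:raw + 20]))
        age, = struct.unpack_from('<I', data, raw + 20)
        name = data[raw + 24:data.index(b'\0', raw + 24)].decode().split('\\')[-1]
        return 'https://msdl.microsoft.com/download/symbols/%s/%s%X/%s' % (
               name, guid.hex.upper(), age, name)
    raise ValueError('no RSDS CodeView record found')

if __name__ == '__main__':
    print(pdb_url(sys.argv[1]))

Feed the resulting URL to curl, with the same Microsoft-Symbol-Server User-Agent trick as in the signtool entry above, and you should get your .pdb.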

"Hello, my name is Mr Snrub"


So, what do you do with that?

Well, I did mention Ghidra and, as a comprehensive disassembler/decompiler utility, Ghidra does of course have the ability to work with debug symbols if they happen to be available (sadly, it doesn't seem to have the ability to look them up automatically like IDA does, or if it does, I haven't found where this can be configured), which helps turn an obtuse FUN_1c003ac90() function name into a much more indicative XilRegister_ReadUlong64()...

For instance, let's say you happen to have been made aware that the reason why you currently can't use the rear USB-A ports for Windows 10 on the Raspberry Pi 4 is that Broadcom/VIA (most likely Broadcom, because they've already done a number on everyone by implementing a DMA controller that chokes past 3 GB on the Bcm2711) have screwed up 64-bit PCIe accesses, and that these end up returning garbage in the high 32-bit DWORD unless you only ever read 64-bit QWORDs as two sequential DWORDs instead of a single QWORD.

As a result of this, you may be exceedingly interested to find out whether there exists something in the function calls used by Microsoft's usbxhci.sys driver that can set 64-bit xHCI register accesses to be enacted as two 32-bit ones.

Obviously then, if, after using the .pdb we've just retrieved above, Ghidra helpfully tells you that there does exist a function call at address 1c003ac90 called XilRegister_ReadUlong64, you are going to be exceedingly interested in having a look at that call:

undefined8 XilRegister_ReadUlong64(longlong param_1,undefined8 *param_2)
{
  undefined8 local_30 [6];
  
  local_30[0] = 0;
  if (*(char *)(*(longlong *)(param_1 + 8) + 0x219) == '\0') {
    DataSynchronizationBarrier(3,3);
    if ((*(ulonglong *)(*(longlong *)(param_1 + 8) + 0x150) & 1) == 0) {
      // 64-bit qword access
      local_30[0] = *param_2;
    } else {
      DataSynchronizationBarrier(3,3);
      // 2x32-bit dword access
      local_30[0] = CONCAT44(*(undefined4 *)((longlong)param_2 + 4),*(undefined4 *)param_2);
    }
  } else {
    Register_ReadSecureMmio(param_1,param_2,3,1,local_30);
  }
  return local_30[0];
}

NB: The comments were not added by Ghidra. Ghidra may be good at what it does, but it's not that good...

Guess what? It so happens that there exists an attribute somewhere, whose bit 0 Microsoft uses to decide whether 64-bit xHCI registers should be read using two 32-bit accesses. Awesome, this looks exactly like what we're after.

The corresponding disassembly also tells us that this if condition is ultimately encoded as a tbnz ARM64 instruction. So if we reverse that logic, by using a tbz instead of a tbnz, we should be able to force the failing 64-bit reads to be enacted as 2x32-bit ones, which may fix our xHCI driver woes...

Let's do just that then, by editing USBXHCI.SYS and changing the EA 00 00 37 sequence at address 0x03a0d0 to EA 00 00 36 (tbnz → tbz) and, for good measure, do the same for XilRegister_WriteUlong64 at address 0x005b34, by also changing 0A 01 00 37 into 0A 01 00 36 to reverse the logic there too. "Yes that'll do".
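
If you'd rather script those two patches than fire up a hex editor, something along these lines should do (the offsets and opcodes are of course only valid for the exact build of the driver discussed here, hence the asserts):

# Patch the two tbnz instructions to tbz. Offsets/opcodes are specific to
# the USBXHCI.SYS build discussed above; the asserts make sure of that.
data = bytearray(open('USBXHCI.SYS', 'rb').read())
for off, old, new in ((0x3a0d0, b'\xea\x00\x00\x37', b'\xea\x00\x00\x36'),
                      (0x05b34, b'\x0a\x01\x00\x37', b'\x0a\x01\x00\x36')):
    assert data[off:off + 4] == old, 'unexpected bytes at 0x%05x' % off
    data[off:off + 4] = new
open('USBXHCI.SYS', 'wb').write(data)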

"I like the way Snrub thinks!"


Well, we may have patched our driver and tried to fool the system by reversing some stuff, but, as the Simpsons have long attested, it's not going to do much unless you have a trusted sidekick to help you out.

Obviously, since we broke the signature of that driver the minute we changed even a single bit, we're going to have to tell Windows to disable signature enforcement for booting on our target ARM64 platform, which can be done by setting nointegritychecks on in the BCD. And while we're at it, we may want to enable test signing as well. Now, most of the commands you'll see are for the local BCD, but that's not what we are after here, since we want to modify a USB-installed version of Windows where, in our case, the BCD is located at S:\EFI\Microsoft\Boot\BCD. So the trick to achieving that (from an elevated command prompt) is:

bcdedit /store S:\EFI\Microsoft\Boot\BCD /set {default} testsigning on
bcdedit /store S:\EFI\Microsoft\Boot\BCD /set {default} nointegritychecks on

However, if you only do that and (after taking ownership and granting yourself full permissions, so that you can replace the existing driver) copy the altered USBXHCI.SYS to Windows\System32\drivers\, you will still be greeted by an obnoxious

Recovery

Your PC/Device needs to be repaired

The operating system couldn't be loaded because a critical system driver is missing or contains errors.

File: \Windows\System32\drivers\USBXHCI.SYS
Error code: 0xc0000221

Oh noes!


The problem, which is what generates the 0xc0000221 (STATUS_IMAGE_CHECKSUM_MISMATCH) error code, is that the checksum field of the PE optional header, which Windows uses to validate critical boot executables, was not updated after we altered USBXHCI.SYS. Therefore checksum validation fails, and this is precisely what the Windows boot process is complaining about.

Fixing this is very simple: Just download PEChecksum64.exe (e.g. from here) and issue the command:

D:\Dis\>PEChecksum64.exe USBXHCI.SYS
USBXHCI.SYS: Checksum updated from 0x0008D39B to 0x0008D19B
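
For reference, there's nothing fancy to that optional checksum: it's a 16-bit one's complement style sum over the whole image, with the CheckSum field itself excluded, plus the file size. A hedged Python sketch (assuming, as is the case for any normally aligned PE, that the file size is a multiple of 4):

# The usual PE optional header checksum algorithm. Assumes len(data) is a
# multiple of 4 (true of any normally aligned PE file).
import struct

def pe_checksum(data):
    pe = struct.unpack_from('<I', data, 0x3c)[0]
    csum_off = pe + 24 + 64            # CheckSum field in the optional header
    csum = 0
    for i in range(0, len(data), 4):
        if i == csum_off:
            continue                   # the checksum field itself is skipped
        csum += struct.unpack_from('<I', data, i)[0]
        csum = (csum & 0xffffffff) + (csum >> 32)
    csum = (csum & 0xffff) + (csum >> 16)
    csum = (csum + (csum >> 16)) & 0xffff
    return csum + len(data)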

For good measure, you will also need to self-sign that driver, so that you can avoid Windows booting into recovery mode with an obnoxious 0xc000000f from winload.exe (though you can still proceed to full boot from there).

Now we finally have all the pieces we need.

For instance, we can replace USBXHCI.SYS on a fast USB 3.0 flash drive containing a Raspberry Pi Windows 10 ARM64 installation created using WOR (and if you happen to have the latest EEPROM flashed, as well as a version of the Raspberry Pi 4 UEFI firmware that includes this patch, you can actually boot the whole thing straight from USB), and, while we are at it, remove the 1 GB RAM limit that the Pi 4 had to have when booting from the USB-C port (since we're not going to use that USB controller), by issuing, from an elevated prompt:

bcdedit /store Y:\EFI\Microsoft\Boot\BCD /deletevalue {default} truncatememory

Do all of the above and, who knows, you might actually end up with a usable Windows 10 ARM64 OS, running from one of the rear panel's fast USB 3.0 ports with a whopping 3 GB of RAM, on your Raspberry Pi 4.

Now, isn't that something?

But this is just a post about using Microsoft's symbol servers.
It's not a post about running full blown Windows 10 on the Raspberry Pi 4, right?

Addendum: In case you don't want to have to go through the taking of ownership, patching, updating of the PE checksum and digitally re-signing of the file yourself, may I also interest you in winpatch?

2020-06-23

Et tu, Microsoft

It's a beautiful Saturday afternoon.

Everything is going as peachy as can be, with the satisfaction of having released, just a couple of days ago, a new version of your software that wasn't short-lived due to the all too common subsequent realisation that you managed to introduce a massive "oops", such as including completely wrong drivers for a specific architecture (courtesy of Rufus 3.10) or having your ext formatting feature break when a partition is larger than 4 GB (courtesy of Rufus 3.8)... Sometimes I have to wonder if Rufus isn't suffering from the same curse as the original Star Trek movie releases (albeit inverted in our case).

Thus, basking in the contentment of a job well done, you fire up your trusty Windows 10, which you upgraded to the 2004 release just a couple of weeks ago (along with that Storage Spaces array you use), and go on your merry way, doing inconsequential Windows stuff, such as deciding to rename one folder.

And that's when all hell breaks loose...

Suddenly, your file explorer freezes, every disk access becomes unresponsive, and you see your most important disk (the one that contains, among other things, all the ISOs you accumulated for testing with Rufus, and which you made sure to set up with redundancy using Storage Spaces, along with an ReFS file system where file integrity had been enabled) undergoing constant access, with no application in sight seemingly performing it...

Oh, and rebooting (provided you are patient enough to wait the 10 minutes it now takes to actually reboot) doesn't help in the slightest. If anything, it appears to make the situation worse, as Windows now takes forever to boot, with the constant disk access issue of your Storage Spaces drive still in full swing.

Yet the Storage Spaces control panel reports that all of the underlying HDDs are fine, a short SMART test on those also reports no issue, and even a desperate attempt to identify what specific drive might be the source of the trouble, by trying each combination of 3 out of the 4 HDDs, yields nothing. If nothing else, this would confirm the idea that Microsoft did a relatively solid job with Storage Spaces, at least in terms of hardware gotchas, considering that every other parity solution I know of, such as the often decried Intel RAID, would scream bloody murder if you removed another drive before it got through the super time-consuming rebuilding of the whole array (which is the precise reason I swore off using Intel RAID and moved to Storage Spaces).

An ReFS issue then? If that's the case, talk about a misnomer for something that's supposed to be resilient...

Indeed, the Event Viewer shows a flurry of ReFS errors, ultimately culminating in this ominous message, which gets repeated many times as the system attempts to access the drive, while you end up finding that your drive has been "remounted" as RAW:
Volume D: is formatted as ReFS but ReFS is unable to mount it;
ReFS encountered status The volume repair was not successful...

Someone at Microsoft may want to look up the definition of resiliency...


Ugh, that's the second ReFS drive I've lost in about a month (the earlier one was an SSD that hosted all my VMs, and that Windows mysteriously overwrote as a Microsoft Reserved Partition)! If that's indicative of a trend, I think that Microsoft might want to weather-test their data-oriented solutions a little better. Things used to be rock-stable, but I can't say I've been impressed by Windows 10's prowess on the matter lately...

And yes, I do have some backups of course (well, I didn't for those VMs, but that was data I could afford to lose), but they are spread all over the place on account that I am not made of money, dammit!

See, the whole point of entrusting my data to a 10 TB parity array made of 4x4 TB HDDs was that I could reuse drives that I (more or less) had lying around, and you'd better believe those were the cheapest 4 TB drives I'd been able to lay my hands on. In other words, Seagate, since HDD manufacturers have long decided, or, should I say, colluded, to stop trying to compete on price, as further evidenced by the fact that I still paid less for an 8 TB HDD, two frigging years ago, than the cheapest price I can find for the exact same model today.

"Storage is getting cheaper", my ass!

Oh and since we're talking about Seagate and reliability, I will also state that, in about 20 years of using almost exclusively Seagate drives, on account that they are constantly on the cheaper side (though Seagate and other manufacturers may want to explain why on earth it is cheaper to buy a USB HDD enclosure, with cable, PSU and SATA ↔ USB converter, than the same bare model of HDD), I have yet to experience a single drive failure for any Seagates I use in my active RAID arrays.

So when people say Seagate is too unreliable, I beg to anecdotally differ since, for the price, Seagate's more than reliable enough. I mean, between paying exactly 0 € for 10 TB with parity vs. 500 to 700 € (current price, at best) for a parity or mirrored NAS array, there's really no contest. I don't mind that a lot of people appear to have semi-bottomless pockets, and can't see themselves go with less than a mirroring solution with brand new NAS drives. But that's no reason to look down on people who do use parity along with cheap non-NAS drives, because price is far from being an inconsequential factor when it comes to the preservation of their data...

And it's even more true here as the issue at hand has nothing to do with using cheap hardware, and everyone knows that a parity or mirroring solution is worth nothing if you don't also combine it with offline backups, which means even more disks, preferably of large capacity, and therefore even more budget to provision...

All this to say that there's a good reason why I don't have a single 8 or 10 TB HDD lying around with all my backups for the array that went offline, and why, as much as I wish otherwise, there are going to be gaps in the data I restore... So yeah, count me less than thrilled with a data loss that wasn't incurred by a hardware failure or my own carelessness (the only two valid causes for ever losing data).

Alas, with the Windows 10 2004 feature update, it appears that the good folks at Microsoft decided that there just weren't enough ways in which people could kill their data. So they created a brand new one.

Enter KB4570719.

The worst part of it is that I've seen reports indicating that this, as well as other corollary issues, was pointed out to Microsoft by Windows Insiders as far back as September 2019. So why on earth was something that should instantly have been flagged as a super critical data loss issue included in the May 2020 update?

Oh and of course, at the time of this post, i.e. about one month after the data-destructive Windows update was released, there's still no solution in sight... though, from what I have found, non-extensible parity Storage Spaces may be okay to use, as long as they were created using PowerShell commands to make them non-dynamically-extensible, rather than through the UI, which forces them to be extensible.


If this post seems like a rant, it's because it mostly is, considering that I am less than thrilled at having had to waste one week trying to salvage what I could of my data. But since we need to conclude this little story, let me impart the following two truths upon you:

1. EVERYTHING, and I do mean EVERYTHING is actively trying to murder your data.
Do not trust the hardware. Do not trust yourself. And especially, do not trust the Operating System not to plunge a sharp blade straight through your data's toga during the Ides of June.

2. (Since this is something I am all too commonly facing with Rufus' user reports) It doesn't matter how large and well established a software company is compared to an Independent Software Developer; the OS can still very much be the one and only reason why third party software appears to be failing, and you should always be careful never to consider the OS above suspicion. There is no more truth to "surely a Microsoft (or an Apple or a Google for that matter) would not ship an OS that contains glaring bugs" today than there was in the past, or than there will be in the future.
The OS can and does fail spectacularly at times (and I have plenty more examples besides this one that I could provide). So don't fail to account for that possibility.

2020-05-15

Why is my Samba connection failing?

Or how nice it is to have a problem that has long eluded you finally explained.

Part 1: The horror

You see, I've been using a FriendlyArm/FriendlyElec RK3399-based NanoPC-T4 to run Linux services, such as a staging web server for Rufus, a network print host, various other things, as well as a Samba file server...

However, this Samba functionality seemed to be plagued like there was no tomorrow: almost every time I tried to fetch a large file from it, Windows would freeze during the transfer, with no recovery and not even the possibility of cancelling, unless the Samba service was restarted manually on the server.

But what was more vexing is that these problems with Samba did not manifest themselves until I switched from the old Lubuntu distribution provided by the manufacturer of that device to a more up-to-date Armbian. With Lubuntu, Samba seemed rock-solid, but with Armbian, it was hopeless.


This became so infuriating that I had to give up on using Samba on that machine altogether and, considering that things usually seemed to be okay-ish after the service had restarted, I dismissed it as a pure Samba/aarch64 bug, which newer versions of Samba or Debian had triggered, and which would eventually get fixed. But of course, that long awaited fix never seemed to manifest itself, and I had better things to do than invest time I didn't have troubleshooting a functionality that wasn't that critical to my workflow.


Besides, the Samba logs were all but useless. Nothing in there seemed to provide any indication that Samba was even remotely unhappy. And of course, you can forget about Windows giving you any clue about why the heck your Samba file transfers are freezing...

Part 2: The light at the end of the tunnel

Recently however, in the course of the Raspberry Pi 4 UEFI firmware experiments, it turned out that I was using that same server to test UEFI HTTP boot of a large (900 MB) ISO, which was being served from the Apache server running on that NanoPC machine, and had no joy getting the full transfer to complete either. Except, there, it wasn't freezing. It just produced a bunch of TcpInput: received a checksum error packet messages before giving up on the transfer altogether...

URI: http://10.0.0.7/~efi/ubuntu.iso
File Size: 916357120 Bytes
Downloading...1%
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
HttpTcpReceiveNotifyDpc: Aborted!
Error: Server response timeout.

Yet, serving the same content from the native Python 3 HTTP server (python3 -m http.server 80, which is a super convenient command to know, as it acts as an HTTP server serving any content from the current directory through the specified port) appeared to be okay, albeit with the occasional checksum errors. This was suddenly starting to look like a lot of compounded network errors... Could this be related to that Samba issue?


Now, the first thing you do when you get reports of TCP checksum errors is try a different cable, a different switch and so on, to make sure that this is not a pure hardware problem. But I had of course tried all that while troubleshooting the failing Samba server and, once again, the results of switching equipment and cabling around were all negative.

But at least a bunch of checksum errors does give you something to start working with.

For one thing, you can monitor these errors with tcpdump (tcpdump -i eth0 -vvv tcp | grep incorrect) and, more importantly, you may find some very relevant articles that point you to the very root of the problem.

Long story short, if tcpdump -i eth0 -vvv tcp | grep incorrect produces loads of checksum errors on the platform you serve content from, you may want to look into disabling checksum offloading on the network adapter, with something like:

ethtool -K eth0 rx off tx off

Or you may continue to hope that the makers of your distro will take action, but that might just turn out to be wishful thinking...

2019-11-17

PowerShell script to Convert UTF-8 misinterpreted file names

You'd think that somebody else would have come up with a quick script to do just that on Windows, but it looks like nobody else bothered, so here goes.

Here's the deal: You copied a bunch of files, and somewhere along the way, one of the applications screwed up and did not produce actual Unicode file names but instead misinterpreted the UTF-8 sequences as CodePage 1252, resulting in something dreadful like this:


And now you'd like to have a quick way to convert the 1252-interpreted UTF-8 to actual UTF-8. So you look around thinking that, surely, someone must have done something to sort this annoyance, but the only thing you can find is a UNIX perl script called convmv, which isn't really helpful. Why hasn't anyone crafted a quick PowerShell script to do the same on Windows already?

Well, it turns out that, because of PowerShell's limitations, and because Windows gets in the way of enacting a proper conversion of 1252 to UTF-8, producing such a script is actually a minor pain in the ass. Still, now, someone has produced such a thing:
#region Parameters
param(
 # (Optional) The directory
 [string]$Dir = "."
)
#endregion

# You'll need to have your console set to CP 65001 AND use NSimSun as your
# font if you want any hope of displaying CJK characters in your console...
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8

$files = Get-ChildItem -File -Path $Dir -Recurse -Name

foreach ($f in $files) {
  $bytes = [System.Text.Encoding]::GetEncoding(1252).GetBytes($f)
  $nf = [io.path]::GetFileName([System.Text.Encoding]::UTF8.GetString($bytes))
  Write-Host "$f" → "$nf" # [$hex]
  # Must use -LiteralPath else files that contain '[' or ']' in their name produce an error
  Rename-Item -LiteralPath "$f" -NewName "$nf"
}

# Produce a "Press any key" message when ran with right click
$auxRegKey='\SOFTWARE\Classes\Microsoft.PowerShellScript.1\Shell\0\Command'
$auxRegVal=(get-itemproperty -literalpath HKLM:$auxRegKey).'(default)'
$auxRegCmd=$auxRegVal.Split(' ',3)[2].Replace('%1', $MyInvocation.MyCommand.Definition)
if ("`"$($myinvocation.Line)`"" -eq $auxRegCmd) {
  Write-Host "`nPress any key to exit..."
  $null = $Host.UI.RawUI.ReadKey('NoEcho,IncludeKeyDown')
}

If you save this script to something like utf8_rename.ps1 in the top directory where you have your misconverted files, and then use Run with PowerShell in the explorer's context menu, you should see some output like this (provided your console is set to codepage 65001, a.k.a. UTF-8, and that you select a font that actually supports CJK characters, such as NSimSun - Microsoft will really have to explain how they have no trouble displaying CJK with NSimSun, but still can't seem to/won't do it with Lucida Console):


Eventually, your file names should have been converted to their expected values, and all will be well:



That is, until someone who thinks it's okay to not properly support UTF-8 absolutely EVERYWHERE (Hey Microsoft, how about some UTF-8 Win32 APIs already?) screws up and forces people to manually unscrew their codepage handling yet again...
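
For the record, if Python happens to be an option for you, the same conversion is a lot less of a fight there - a quick sketch:

# Re-interpret CP1252-misdecoded UTF-8 file and directory names, bottom-up
# so that children get renamed before their parent directories.
import os

for root, dirs, files in os.walk('.', topdown=False):
    for name in files + dirs:
        try:
            fixed = name.encode('cp1252').decode('utf-8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue   # not a mangled name (or not recoverable): leave it be
        if fixed != name:
            os.rename(os.path.join(root, name), os.path.join(root, fixed))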

Bonus

By the way, if you're using Windows 10 19H1 or later, you should know that Microsoft finally added a setting to set the system codepage to UTF-8, which seems to finally improve on the failed codepage conversions that prompted the above script. Even though it says it's in Beta, you may want to enable it:

2019-07-24

Installing Debian ARM64 on a Raspberry Pi 3 in UEFI mode


That's right baby, we're talking vanilla Debian ARM64 (not Raspbian, not Armbian) in pure UEFI mode, where you'll get a GRUB UEFI prompt allowing you to change options at boot and everything.

At long last, the Raspberry Pi can be used to install vanilla GNU/Linux distributions in the same manner as you can do on a UEFI PC. Isn't that nice?


Not that I don't like Raspbian or Armbian (as a matter of fact, I am impressed by the very fine job the Armbian maintainers are doing with their distro), but I have now spent enough time helping with the UEFI Raspberry Pi 3 effort that I might as well push this whole endeavour to its logical conclusion: installing vanilla ARM64 GNU/Linux distros. That's because, in terms of long term support and features, nothing beats a vanilla distro. I mean, what's the point of having a 64-bit CPU if the distro you're going to install forces you to use 32-bit?

Prerequisites

Hardware:

  • A micro SD card with sufficient space (16 GB or more recommended). You may also manage with a USB Flash Drive, but this guide is geared primarily towards SD card installation
  • A Raspberry Pi 3 (Model B or Model B+) with a proper power source. If you ever see a lightning bolt on the top left of your display during install, please invest in a power supply that can deliver more wattage.
Note that our goal here is to install the system on an SD card, through netinstall, using a single media for the whole installation process.

In other words, there is no need to use an additional USB Flash Drive, as we could do, to boot the Debian installer and then install from USB to SD. This is mostly because it's inconvenient to have to use two drives when one can most certainly do, and also because while USB to SD may look easier on paper (no need to fiddle with the "CD-ROM" device for instance) it's actually more difficult to complete properly.

Thus, while I'll give you some pointers on how to perform a USB based installation in Appendix D, I can also tell you, from experience, that you are better off not trying to use a separate USB drive as your installation media, and instead performing the installation from a single SD, as described in this guide.

Software:

  • The latest Raspberry Pi 3 UEFI firmware binary, along with the relevant Broadcom bootloader support files (i.e. bootcode.bin, config.txt, fixup.dat, start.elf).

    You can find a ready-to-use archive with all of the above at https://github.com/pftf/RPi3/releases
    (RPi3_UEFI_Firmware_v#.##.zip, 3 MB).

    Note that this firmware archive works for both the Raspberry Pi 3 Model B and the Raspberry Pi 3 Model B+ (as the relevant Device Tree is automatically selected during boot).
  • (Optional) The non-free WLAN firmware binaries that are needed if you want to use Wifi for the installation.
    Note that, if you picked up the archive above then you don't need to do anything as the WLAN firmware binaries are included in it too.

Preparation


Note: a complete example of how to achieve the first 3 steps below using DISKPART on Windows or fdisk + mkfs on Linux is provided in Appendix A at the end of this post.
  • Partition your SD media as MBR and create a single partition of 300 MB of type 0x0e (FAT16 with LBA).
    Do not be tempted to use GPT as the partition scheme or 0xef (ESP) for the partition type, as the on-die Broadcom bootloader does not support either of those. It must be MBR and type 0x0e. You can use the command line utilities fdisk on Linux or DISKPART on Windows to do that.
  • Set the partition as active/bootable. This is very important as, otherwise, the Debian partition manager will not automatically detect it as ESP (EFI System Partition) which will create problems that you have to manually resolve (See Appendix C).
    If using fdisk on Linux, you can use the a command to set the partition as active.
    If using Windows, you can use DISKPART and then type the command active after selecting the relevant disk and partition.
  • Format the partition as FAT16. It MUST be FAT16 and not FAT32, as the Debian partition manager will not detect it as ESP otherwise and, again, you will have to perform extra steps to salvage your system before reboot (Appendix C).
    The Linux and Windows base utilities should be smart enough to use FAT16 and not FAT32 for a 300 MB partition, so you should simply be able to use mkfs.vfat /dev/<yourdevice> (Linux) or format fs=fat quick in Windows' DISKPART. The Windows Disk Manager should also be smart enough to use FAT16 instead of FAT32 if you decide to use it to format the partition.
  • Extract the UEFI bootloader support files mentioned above to the newly formatted FAT partition. If you downloaded the Raspberry Pi 3 UEFI firmware binary from the link above, you just have to uncompress the zip file onto the root of your media, and everything will be set as it should be.
  • Extract the content of the Debian ISO you downloaded to the root of the FAT partition. On Windows you can use a utility such as 7-zip to do just that (or you can mount the ISO in File Explorer then copy the files).
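
Before moving on, if you want to double-check that the media meets the three partitioning requirements above (MBR scheme, type 0x0e, active flag), they are all visible in the first sector, so a quick Python sketch like this one (run as root/Administrator against the raw device) can validate them:

#!/usr/bin/env python3
# Validate the partitioning requirements: MBR scheme, first partition of
# type 0x0e (FAT16 LBA), active/bootable flag set. Pass the raw device as
# argument (e.g. /dev/sdf on Linux, \\.\PhysicalDrive6 on Windows).
import sys

with open(sys.argv[1], 'rb') as f:
    mbr = f.read(512)
entry = mbr[446:462]                   # first partition table entry
assert mbr[510:512] == b'\x55\xaa', 'no MBR boot signature'
assert entry[4] != 0xee, 'protective MBR found: this media is GPT, not MBR'
assert entry[4] == 0x0e, 'partition type is 0x%02x, not 0x0e' % entry[4]
assert entry[0] == 0x80, 'partition is not marked active/bootable'
print('Partitioning looks good')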
Once you have completed the steps above, eject your SD card, insert it in your Pi 3 and power it up. Make sure no other media is plugged in besides the SD card. Especially, make sure that there aren't any USB Flash Drives or USB HDDs connected.

Initial Boot


Unless you did something wrong, you should see the multicoloured boot screen, which indicates that the Raspberry Pi properly detected your SD media and is loading the low level CPU bootloader from it.

Then you should see the black and white Raspberry logo, which indicates that the Raspberry Pi UEFI firmware is running.



Wait for the GNU GRUB menu to appear (which it should do by default after the Pi logo disappears) and choose *Install (which should already be the default) and let the Debian installer process start.

Debian Installer


Note: In case anything goes wrong during install, remember that you can use Alt-F4 to check the current installation log for details about the error.
  • Select your Language, Country and Keyboard and let the installer proceed until it reports that No Common CD-ROM drive was detected.
  • At this stage, on Load CD-ROM drivers from removable media select No.
  • On Manually select a CD-ROM module and device select Yes.
  • On Module needed for accessing the CD-ROM select none.
  • On Device file for accessing the CD-ROM type exactly the following:

    -t vfat -o rw /dev/mmcblk0p1

    For the reasons why you need to type this, see Appendix B below.
  • With the "CD-ROM" device set, let the installation process proceed and retrieve the base packages from the media until it asks you for the non-free firmware files on the network hardware detection. If you plan to use the wired connection, you can skip the (Optional) step below.
  • (Optional) If you plan to use WLAN for the installation, choose Yes for Load missing firmware from removable media. If you created the media from that Raspberry Pi 3 firmware archive linked above, the relevant firmware files will be detected under the firmware/ directory.

    Note 1: Because there are multiple files to load, you will be prompted multiple times for different firmware files (look closely at their names, you will see that they are actually different). This is normal. Just select Yes for each new file.

    Note 2: Though they are included in the UEFI firmware zip archive we linked above, it is most likely okay not to provide the .clm_blob if you don't have it (the Wifi drivers should work without that file), so don't be afraid to select No here if needed.
  • Set up your network as requested by the installer by (optionally) choosing the network interface you want to use for installation and (also optionally) setting up your access point and credentials if you use Wifi.
  • Go through the hostname, user/password set up and customize those as you see fit.
  • Let the installer continue until you get to the Partition disks screen. There, for Partitioning method select Manual. You should see something like this:

    MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
         #1  primary  314.6 MB  B  K  ESP
             pri/log                  FREE SPACE

    If, instead, you see something like this:

    MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
         #1  primary  314.6 MB  B   fat16
             pri/log                FREE SPACE

    In other words, if you don't see B K ESP for the first partition, then it means that you didn't partition or format your drive as explained above, and you will need to refer to Appendix C (Help, I screwed up my partitioning!) to sort things out.
  • From there select the FREE SPACE partition and use the partition manager's menu to create two new primary partitions (one for swap and one for the root file system), until you have something like this:

    MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
         #1  primary  314.6 MB  B  K  ESP
         #2  primary    1.0 GB     f  swap    swap
         #3  primary   14.7 GB     f  ext4    /
    
  • Select Finish partitioning and write changes to disk and then Yes on Write the changes to disks? and let the installer continue with the base system installation.
  • After a while, the installer will produce a big red ominous message that says:

    [!!] Configure the package manager
      
    apt-configuration problem
    An attempt to configure apt to install additional packages from the CD failed.

    This, however, is actually a completely benign message that you can safely ignore by selecting Continue. That's because, since we are conducting a net install, we couldn't care less about no longer being able to access the "CD-ROM" files after install...
  • Once you have dismissed the message above, pick the mirror closest to your geographical location and let the installer proceed with some more software installation (this time, the software will be picked from that network mirror rather than from the media).
    When prompted for the "package usage survey" pick whichever option you like.
  • Finally, at the Software selection screen, select any additional software package you wish to install. Note that the "Debian desktop environment" should work out of the box if you decide to install it (though I have only tested Xfce so far). It's probably a good idea to install at least "SSH server".
  • Let the process finalize the software and GRUB bootloader installation and, provided you didn't screw up your partitioning (i.e. you saw B K ESP when you entered the partition manager, otherwise see Appendix C) select Continue to reboot your machine on the Installation complete prompt.

If everything worked properly, your system will now boot into your brand new vanilla Debian ARM64 system. Enjoy!

Post install fixes


Here are a few things that you might want to fix post install:
  1. You may find a cdrom0 drive on your desktop, which doesn't seem to be accessible. This is a leftover from the installer process not knowing how to handle the installation media device. You should edit /etc/fstab to remove it.
     
  2. If you installed the cups package, you may get an error while loading modules (systemctl --failed will report that systemd-modules-load.service is in failed state). This is all due to the current cups package trying to load IBM PC kernel modules... on a non PC device. To fix this, simply delete /etc/modules-load.d/cups-filters.conf and reboot.
  3. If using UEFI firmware v1.6 or later, you can enable the serial console by editing /etc/default/grub and changing GRUB_CMDLINE_LINUX="" to GRUB_CMDLINE_LINUX="console=ttyS0,115200", and then running update-grub.
    You may also enable serial console access for GRUB by adding the following in the same file:
    GRUB_TERMINAL=serial
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --stop=1" 

Appendix A: How to create and format the SD partition for installation


IMPORTANT NOTE 1: Please make sure to select the disk that matches your SD media before issuing any of these commands. Using the wrong disk will irremediably destroy your data!

IMPORTANT NOTE 2: Do not be tempted to "force" FAT32 in DISKPART or mkfs, and do not forget to set the bootable/active flag, else you will run afoul of the issue described in Appendix C.

Windows

C:\>diskpart

Microsoft DiskPart version 10.0.18362.1

Copyright (C) Microsoft Corporation.
On computer: ########

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          238 GB      0 B        *
  Disk 1    Online          465 GB  1024 KB        *
  Disk 4    Online         4657 GB  1024 KB        *
  Disk 5    Online         4649 GB      0 B        *
  Disk 6    Online           14 GB    14 GB

DISKPART> select disk 6

Disk 6 is now the selected disk.

DISKPART> clean

DiskPart succeeded in cleaning the disk.

DISKPART> convert mbr

DiskPart successfully converted the selected disk to MBR format.

DISKPART> create partition primary size=300

DiskPart succeeded in creating the specified partition.

DISKPART> active

DiskPart marked the current partition as active.

DISKPART> format fs=fat quick

  100 percent completed

DiskPart successfully formatted the volume.

DISKPART> exit

Leaving DiskPart...

C:\>

Note: if needed, you can also force a specific partition type (e.g. set id=0e to force FAT16 LBA), but that shouldn't be necessary, as DISKPART should set the appropriate type on its own.

Linux


The following assumes /dev/sdf is your SD/MMC device. Change it in all the commands below to use your actual device.

(Optional) If your drive was partitioned as GPT, or if you're not sure, you may want to issue the two following commands first. If it's MBR you can skip this step:

# Delete the primary GPT:
dd if=/dev/zero of=/dev/sdf bs=512 count=34
# Delete the backup GPT:
dd if=/dev/zero of=/dev/sdf bs=512 count=34 seek=$((`blockdev --getsz /dev/sdf` - 34))

Now use fdisk and mkfs to partition the drive:

root@debian:~# fdisk /dev/sdf

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x7d188929.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-31291391, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-31291391, default 31291391): +300M

Created a new partition 1 of type 'Linux' and of size 300 MiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): e
Changed type of partition 'Linux' to 'W95 FAT16 (LBA)'.

Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@debian:~# mkfs.vfat -F 16 /dev/sdf1
mkfs.fat 4.1 (2017-01-24)
root@debian:~#

Appendix B: Why do we need to use -t vfat -o rw /dev/mmcblk0p1 as the CD-ROM device?

  • Why this weird device name with options? Because these are actually mount command line parameters and the Debian installer actually calls mount behind the scenes and feeds it exactly what we write here. This means we can hijack the device name field to invoke the additional mount parameters we need.
  • Why /dev/mmcblk0p1? That's simply the name of the device for the first partition (p1) on the SD/MMC media (mmcblk0), as seen by the Linux kernel on a Raspberry Pi.
  • Why -t vfat? Because the Debian installer appends fstype=iso9660 to the mount options, which prevents automount and forces us to override the file system type.
  • Why -o rw? Because the Debian installer won't be able to use the first partition for /boot/efi otherwise, or load the WLAN firmware from the media (you get a device or resource busy error when trying to remount the media).

Appendix C: Help I screwed up my partitioning!


Of course you did. You thought you knew better, and now you are paying the price...

The problem, in a nutshell, is that:
  1. You can't use a regular ESP on a Raspberry Pi, on account that GPT or an MBR partition with type 0xef are not handled by the Broadcom CPU bootloader. And there is nothing you can do about this, because this is a behaviour that's hardcoded in the CPU silicon itself.
     
  2. The Debian installer's partition manager is very temperamental about what it will recognize as an ESP. In other words, if you don't use the perfect combination of boot flag, partition type and file system, it will fail to see it as an ESP.
Now the good news is that this is recoverable, but you need to know what you're doing.
  • The first thing you should do in the Debian partition manager is set the first partition to be used as ESP. In other words, you will need to edit the first partition until you get this:
    MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
         #1  primary  314.6 MB  B  K  ESP
             pri/log                  FREE SPACE
  • Then you can proceed as the guide describes, but you need to bear in mind that, as soon as you choose to write the partition changes, the partition manager will have changed your first partition type to 0xef, which, as we have seen, is unbootable by the CPU. Therefore, DO NOT PROCEED WITH THE SYSTEM REBOOT AT THE END UNTIL YOU HAVE CHANGED THE PARTITION TYPE BACK.
  • To do that, once you get to the Installation complete prompt that asks you to select Continue to reboot, you need to press Alt-F2 then Enter to activate a console.
  • Then type exactly the following command:
    chroot /target fdisk /dev/mmcblk0
    Then press the keys t, 1, e, w
  • Now you can go back to the installer console (Alt-F1) and select Continue to reboot (a non-interactive alternative for the type change is sketched below).
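For reference, if sfdisk is available in the installed system, the same type change can likely be performed non-interactively (an untested alternative to the fdisk key sequence above):

# Set the type of partition 1 back to 0x0e (W95 FAT16 LBA):
chroot /target sfdisk --part-type /dev/mmcblk0 1 e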

Appendix D: Installing to a SD from an USB Flash Drive


As I explained above, and though it may seem simpler, I would discourage using this method to install Debian on a Raspberry Pi. But I can understand that, if you don't have a card reader, you may be constrained to use it.

For the most part, this should work fine out of the box. As a matter of fact, if you do it this way, you won't have to fiddle with the "CD-ROM" media detection. However, I will now list some of the caveats you'll face if you proceed like this:

Caveat 1: If you use guided partitioning, your SD/MMC media will be formatted as GPT (because this is a UEFI system, after all), which the Broadcom CPU used in the Raspberry Pi cannot boot. It has to be MBR. How you are supposed to force MBR over GPT in the Debian partition manager, I'll let you figure out.

Caveat 2: Similarly, you need to go through the 0xef to 0x0e conversion of your ESP, as the Pi won't boot from that partition otherwise.

Caveat 3: Of course, you will also need to duplicate all the bootcode.bin, fixup.dat and so on from your USB boot media onto the SD ESP partition if you want it to boot (which is the reason why it is much more convenient to just set the ESP and Debian installer on the SD right off the bat, so you don't risk forgetting to copy a file). A sketch of such a copy is shown below.
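For illustration, the duplication could look something like the following, where the device names are assumptions you will need to adjust, and where the exact list of firmware files depends on your Pi model:

# Assuming /dev/sda1 is the USB boot media and /dev/mmcblk0p1 the SD ESP:
mkdir -p /mnt/usb /mnt/sd
mount /dev/sda1 /mnt/usb
mount /dev/mmcblk0p1 /mnt/sd
cp /mnt/usb/bootcode.bin /mnt/usb/start*.elf /mnt/usb/fixup*.dat /mnt/usb/*.dtb /mnt/sd/
umount /mnt/usb /mnt/sd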

Caveat 4: When I tried a USB to SD install, I found that the GRUB installer somehow didn't seem to create an efi/boot/bootaa64.efi, which, if left uncorrected, will prevent the system from booting automatically.
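If you run into this, manually copying GRUB's EFI executable to the fallback boot path before rebooting should do the trick (EFI/debian/grubaa64.efi is my assumption of where Debian's GRUB installer places its binary; adjust as needed):

mkdir -p /target/boot/efi/EFI/BOOT
cp /target/boot/efi/EFI/debian/grubaa64.efi /target/boot/efi/EFI/BOOT/BOOTAA64.EFI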

2018-10-31

GitHub verified commits with GPG, TortoiseGit and MSYS/MinGW

If you've been browsing git repositories in GitHub, you may have seen that some of them have Verified commits, which is a nice way to indicate that the person who actually committed the code is indeed who they say they are, and not an impersonator who just happened to reuse an e-mail address that is not theirs, for dubious reasons.

Typical display of "Verified" GPG commits in GitHub


Obviously, if you are the only person who has write access to your GitHub repositories (which is how I tend to operate, for obvious security reasons), verified commits are not that big of a deal. Still, having the badge show up in GitHub does help ensure that people who are browsing the repo know that you are taking security and trust seriously. So we might as well add commit signing, since it's pretty straightforward to do.

Now, since these are my main development tools, I will hereafter demonstrate how you can do that using TortoiseGit and MSYS/MinGW GPG on Windows. If you use something else, then you will have to look for post entries by other people, that match the tools you use. Also, to give credit where credit is due, I will point out that I am mostly copying Julian's dev.to entry titled "Sign your git commits with tortoise git on windows".

So, without further ado, here's how you should proceed:
  1. Create a new GPG key by firing up a MinGW prompt and issuing the following:

    $ gpg --full-generate-key --allow-freeform-uid
    gpg (GnuPG) 2.2.10-unknown; Copyright (C) 2018 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    
    gpg: keybox '/home/nil/.gnupg/pubring.kbx' created
    Please select what kind of key you want:
       (1) RSA and RSA (default)
       (2) DSA and Elgamal
       (3) DSA (sign only)
       (4) RSA (sign only)
    Your selection? 1
    RSA keys may be between 1024 and 4096 bits long.
    What keysize do you want? (2048) 4096
    Requested keysize is 4096 bits
    Please specify how long the key should be valid.
             0 = key does not expire
          <n>  = key expires in n days
          <n>w = key expires in n weeks
          <n>m = key expires in n months
          <n>y = key expires in n years
    Key is valid for? (0) 0
    Key does not expire at all
    Is this correct? (y/N) y
    
    GnuPG needs to construct a user ID to identify your key.
    
    Real name: Pete Batard
    Email address: pete@akeo.ie
    Comment:
    You selected this USER-ID:
        "Pete Batard <pete@akeo.ie>"
    
    Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
    We need to generate a lot of random bytes. It is a good idea to perform
    some other action (type on the keyboard, move the mouse, utilize the
    disks) during the prime generation; this gives the random number
    generator a better chance to gain enough entropy.
    We need to generate a lot of random bytes. It is a good idea to perform
    some other action (type on the keyboard, move the mouse, utilize the
    disks) during the prime generation; this gives the random number
    generator a better chance to gain enough entropy.
    gpg: /home/nil/.gnupg/trustdb.gpg: trustdb created
    gpg: key F3E83EBB603AF846 marked as ultimately trusted
    gpg: directory '/home/nil/.gnupg/openpgp-revocs.d' created
    gpg: revocation certificate stored as '/home/nil/.gnupg/openpgp-revocs.d/236D8595DE48618C26293122F3E83EBB603AF846.rev'
    public and secret key created and signed.
    
    pub   rsa4096 2018-10-31 [SC]
          236D8595DE48618C26293122F3E83EBB603AF846
    uid                      Pete Batard <pete@akeo.ie>
    sub   rsa4096 2018-10-31 [E]
    

    You'll notice that, when prompted, we chose to create a 4096-bit RSA and RSA key that never expires.

    During that process, you will also be prompted to enter the password that safeguards your key. This is the password you will have to enter each time you sign a new commit, so choose it wisely.

    Note that, when using MSYS2 + MinGW, your GPG keys will be stored under C:\msys2\home\<your_user_name>\.gnupg\.
     
  2.  Generate the public key in a format that GitHub can accept:

    $ gpg --armor --export pete@akeo.ie
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    
    mQINBFvZ0+gBEAC7Jkdt3aW5iURti+36suQN9dmhGfVJMEV/Y9giby78wYcq51rj
    IvJ2AuYEhVgiFwT2hrlKuems0Jsln6wGUULAQXpLMU4XxlyKHwBE3ETXCXWQbzxH
    rNqerDKNu54M/r3XNCW7r38vwNdYrh656eLccZ/jOH8aSSZ9KkBjJ1wa78tx7YZy
    +FXXjDbamP3Pu3CPp7Nx3y69FCFm2uYrDkLWqcOvweME9imIqdsLfd5bM+wYclbN
    QQuZArV7uoQ2xYFlVweaob5U3iUsGUQYuY7x3Mlbz/73wYxuOGUt5n6de3tdefrN
    V5csD3aJVQKjFWOW2oNzI8Qik9pDie+3XQEfbIVHhgCx9kLVe2MzBaWrnPgk2Epj
    bIhRheqzvV15iC70QchMrtDzXOcbNhaytggYWPRx1YtEN3G4pPnsVfq0oSdNhwlw
    VLYm6eK+kjr0PykIANiiDDe/4WiFTIS1mobp++QCFXm41jtfXP6PM3NJdf1Hx5VX
    CcRQKXmukeyW4DfYtr9GoKeu9G1vGQev1U+qjtOk+9SRrofsqfCqzJP4drjbSyk9
    43q9HBYSBjnslisQnrhhcl5/5Yb99+sS2EnpW7am/sarCHGiPkLi6eHfYpbxX7Lg
    nLXjmXYlpyCkJnkgwzsTUs3+7w2KHaBZ7yme70x2edBD9f1Ar3zm+ryW5QARAQAB
    tBpQZXRlIEJhdGFyZCA8cGV0ZUBha2VvLmllPokCTgQTAQgAOBYhBCNthZXeSGGM
    JikxIvPoPrtgOvhGBQJb2dPoAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJ
    EPPoPrtgOvhGUpkQAIHSu7BNo4/jUhtHjBMiiYVE6eJh1J8+lWkXuATCxo3BXrMb
    AAAdNsrPca09NVdSli3xameKSnWt3hXRpkNM2cAC/Sus8UjYGDaCP1pWNyfmd70y
    /uAZGf1FIeWL4yIiFcDROobLqlCE+qViWu8sG2Ris8hGA8sjR0cn5891Q/ncHFtE
    YYHzh0mn+A9I/gGSvArqYJdNNBptGplo2fnQQIODwHNYSPMCzBawFoll6jocjg6q
    FqlawC5f9zPs5HP9k0k0pp37f8i+ANftCfdwOEWurfBDGqrxKiJIyIaS9kLzwCQX
    poJGZO/rVbCDGvexfVkqoKMJRK0jO2Rh3p0vifZ2cwKPSFjWfSjUiPAUpcz0nuV5
    BSkrMNc1VHgP1FM4v2Vpi7lnaoWMLpVz3VJ8yRyRD/7c7oVEl0NL8lHMZaHiPprf
    LmeLIgM5ndh9wkvD9j2EH5JR72lACQtg5n9qmbDro2uJbtGqrhqrVQdPrPtv1XoM
    0JAIL+1RvdTuPPBclmTLwdXaztlnEjJOA9loWpkyMIlZVcb/6TWamGAzxu4wMv8o
    aQpaVqNIO9kq79lZMHFGDE4VRHAjrJh3nXKpi+/JOIf7xKAnwrZAquAC+bfqYYUm
    W9jg35aB+jASlI7+TvQHgal2dFSYebCeWpwPlJr7XeXWJab+UNajeKxRQ2wMuQIN
    BFvZ0+gBEAC6nJAWbF6YAnPDaHTTBAAYEHlbiPTt8gYUgoxkUJxV2fcj0g2ye0+x
    gFh7Z3eTw5zq3iojah8EWBj5WOHeI1R1q244qaje467onbgowcxsFOH/TgBs1aew
    DWNDIMJl/vkSEY5xdmtJIGIUJ/+BH9U7kSX3lB5IFz37WH6hcgQZUjD0fx+Hv5ZX
    7Fz8YGXnBnJRwblCJbvkq2BD/1fSI5REddILkQAKd9mzRoXFvKRYwV5Oq78NU4cd
    5e20+ALHCPC7fQQ3jFzUo2WMLywWDAi42DOn7E6/tIZT7BwKF08ozNDPpWTj5OOO
    OAqjesgsXI410kdayv25LopHnnPCcIcjm35AtA8TDSEfPFlbm59tBo7q5VWi15yb
    X1+vkSZfcUoe9lXIr/Ea+RYgayI8xFkBiOlWn8NaWjWrZEr6OG4EOk97bAgey4M4
    KEJJkQsQYsVSQ8yVkt1wETkH6GHQFoyoFJUJkxeWDXoG9LyBYr7n+NSbjOAujy/c
    XyemCFkJXSeTcn4KAIboBvEV0nQOMjfaEr+hkfXbESfm92MSlL54arrgyY7vcOSI
    iztc4ZiTmkQPeeG4PsqUaHYB1lj+qapVQlZ9O+OFH280YWylLBZJMWOKM1lMqgz3
    Z2avF2FVax+xBeE8pMnWAUbKTHB7BQAhATjxGGlWy6QtJRxpOrTcGwARAQABiQI2
    BBgBCAAgFiEEI22Fld5IYYwmKTEi8+g+u2A6+EYFAlvZ0+gCGwwACgkQ8+g+u2A6
    +EbNAQ//WL261oYfKskEmBzz88M7Tt6aj8NyQmXyrIY6RoEYK4+rnS2zFwQfIF6p
    3e4avUZYF5xTOSuuiJv4IImnjlilHjA+r6LcmqIGKilIeFQwyNLVr+H/FvZSzKYY
    Psr6v0CCBn/6UICmrLoDgr1IiWmlwVDKVNXDZLGHprB00WBrso0pBVWEmbkKzlP9
    lYlC11yXo/wsLLnQNbz3DzcUgtyFExyL37EGr1zw2xfmwmRZRQmpILpuiBE/VGI0
    pH4JReeGjcqh0TkK+70whQnM9VX6eZbV4cwtBXg1CixY+cwyQcCreRTneGPQT9jj
    5dmD9duQOiDw5QGAoQ4tc6AxQcf62KsZmXQ715IMVrbn3leeoVR5PaFQ/PR3MQn+
    eS0f+wIDLBgD1tjUeOvjWs79sB7LAvinndZUA/6+nfxR29753gpssFW5tFEK5Kit
    OwCnNG4P3SjqfYAN+IIBTUUUPjGPHTKEd85XUBUlCJg7i1iLaeZqamp9oga4gv6d
    lLQ50J84i4yk02Afhlic5CNw1l9TfCgdFWF/9+WO7qzHmdJsZl/9Gs05J3hbPzqh
    uji6ujyI7v9vDTDC2tR1l3zHTomFJ6Vs42MdpaBWtnePAIohnhtLKCjG3/Z04idj
    jjGTV+5EASM2h3WV7vfmxem2HyxEM0lwa5zj8AtaWugqmiO6Rik=
    =aMFF
    -----END PGP PUBLIC KEY BLOCK-----
    
     
  3. Go to https://github.com/settings/gpg/new and copy/paste the public key data from above.
     
  4. Because we are going to call GPG independently of MinGW, you must now copy your C:\msys2\home\<your_user_name>\.gnupg directory to C:\Users\<your_user_name>\.

    This is needed because this is the default location gpg.exe looks for keys when not invoked from MSYS/MinGW, and it doesn't seem possible to alter it without modifying the registry or creating environment variables, which is cumbersome. Besides, this is important data, and you are a lot more likely to back up the content of C:\Users\<your_user_name>\ than C:\msys2\home\, so it's probably not a bad idea to duplicate this valuable content there.
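    From a Windows command prompt, that copy can be performed with something like the following (assuming a default MSYS2 installation path, and bearing in mind that your MSYS2 user name may differ from your Windows one):

    xcopy /E /I C:\msys2\home\<your_user_name>\.gnupg "%USERPROFILE%\.gnupg"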
     
  5. Get the key id that you'll need to use in your config file with:
    $ gpg --list-keys --keyid-format LONG pete@akeo.ie
    gpg: checking the trustdb
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
    pub   rsa4096/F3E83EBB603AF846 2018-10-31 [SC]
          236D8595DE48618C26293122F3E83EBB603AF846
    uid                 [ultimate] Pete Batard <pete@akeo.ie>
    sub   rsa4096/308A9C6106D2FCE4 2018-10-31 [E]

    The 40-character hex string under pub is the value you are after.
     
  6. In each project where you want to have signed commits, edit your .git/config so that it contains the following options:
    [user]
        signingkey = 236D8595DE48618C26293122F3E83EBB603AF846
    [commit]
        gpgsign = true
    [gpg]
        program = "C:/msys2/usr/bin/gpg.exe"
If you do the above correctly, then next time you commit into the git repo you modified, you should be prompted for your GPG key password, and, after you push to GitHub, you should find that the commit has the Verified badge.
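Alternatively, if you'd rather not edit .git/config by hand, the same options can be set by running the following from the repository directory:

$ git config user.signingkey 236D8595DE48618C26293122F3E83EBB603AF846
$ git config commit.gpgsign true
$ git config gpg.program "C:/msys2/usr/bin/gpg.exe"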

Note that you can also validate whether your commit was properly signed, before pushing, by issuing:
$ git log --show-signature
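For a properly signed commit, you should then see something along these lines in the output (illustrative only; the exact formatting depends on your gpg version):

gpg: Signature made Wed Oct 31 12:00:00 2018 GMT
gpg:                using RSA key 236D8595DE48618C26293122F3E83EBB603AF846
gpg: Good signature from "Pete Batard <pete@akeo.ie>" [ultimate]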

2017-05-17

Using a YubiKey to store code signing certificates

Preamble (skip this if you only want the How To)

If you are a Windows software developer and/or distributor then, no doubt, you are well aware that you should always digitally sign your software, so that a minimum level of accountability and trust can be established between yourself and your users.

As you should also know, this process is usually accomplished by acquiring a Windows Authenticode credential (a credential is a certificate + its associated private key) which can then be used to digitally sign binary executables.

However, one must also consider the security aspect of the signing process and realize that, given the faintest opportunity, ill-intentioned people will try to grab your code signing credentials if they can. For instance, perhaps you are already aware that the NSA's stuxnet virus was signed using credentials that were stealthily duplicated from JMicron and Realtek, and that, outside of state sponsored endeavours, malware authors are also exceedingly interested in acquiring data they could use to steal the identity of a trustworthy person, even more so if that person or entity is producing popular software.

(Image credits: Yubico.com)
This means that, should malware find its way onto your development machine (as part of an infected development tool, for instance, which malware authors are likely to target if they can, as it can mean a huge payoff), it'll most likely be able to steal BOTH your credential and its private key password, since one can only expect semi-competent malware to implement both a disk scanning and a keylogging facility.

Therefore, as a code signing developer, you're only ever one dodgy software installation away from finding that your credential(s) and protective password(s) have been exfiltrated into very wrong hands indeed...

As a result, it doesn't take a paranoid person to realize that storing credentials on disk, even if it's a removable USB flash drive, that you only plug when signing binaries, is a very very bad idea. Similarly, if your alternative is to store your signing credentials into the Windows certificate store, and expect that it'll be enough, you should probably realize that the level of a software-only security solution on Windows goes about as far as the distance you can throw a chair... with Steve Ballmer sitting on it.

Thus, what you really want is to have your credentials stored on a dedicated removable device that's designed precisely to protect that kind of stuff. And this is where FIPS 201 Personal Identity Verification (PIV) devices, and especially the convenient and relatively affordable YubiKeys from Yubico, come into play.


For the record, most of the process described below can likely be applied to any FIPS 201 PIV device, but since YubiKeys are what I use, I'll focus only on YubiKey usage.

Prerequisites


First of all, it is important to note that not all YubiKeys are created equal. Especially, the cheapest YubiKey model does NOT have PIV support. Thus, if you plan to use a YubiKey for the purpose of signing code, you should steer away from the FIDO U2F Security Key model, as it is incompatible with this procedure. With this being said, the prerequisites are as follows:
  • Any YubiKey model EXCEPT the FIDO U2F Security Key. My preference goes for the YubiKey 4, but anything that has the PIV feature will do.
  • Your code signing credentials, which you have obtained from your Certification Authority, temporarily saved as a .p12 file (Note: You may have to use the Windows certificate store export feature to get to that file, and follow the procedure highlighted here, if your CA only delivers signing credentials into the certificate store)
  • The latest version of YubiKey PIV Manager, which you should download and install from here.

Storing your code signing credential into a YubiKey

  1. Open PIV Manager (pivman.exe). You may have to go fetch it from its installation directory if it did not create a Start menu entry, as was the case on my machine:


  2. Plug your YubiKey. If this is the first time you use it, you will be greeted by the following screen asking you to set a PIN:

    Under "Management Key" you should keep the "Use PIN as key" option checked.

    On the other hand, since you're going to use that key for code signing on Windows, you can disregard the cross-platform compatibility recommendation, as I haven't seen any issues with using a PIN with extended alphanumeric characters on Windows, and, with a length of 8 characters, the PIN is already short enough as it is.

    One thing I should point out is that, just like with a credit card, the device only gives you 3 attempts at entering the right PIN before locking itself (which is exactly what you want from a device that stores valuable data) so keep that in mind when you use it. Of course, a YubiKey can always be reset if locked, but you will lose access to the credentials stored on it.

  3. Once you have set the PIN, you should see the following screen, where you need to click the "Certificates" button:

  4. On the Certificates screen, select the "Digital Signature" tab:

  5. Click "Import from file" and select your .p12 code signing credential. You will be prompted for a password, which of course is the password for the private key of your .p12 (and not the key's PIN).

  6. If everything goes well, you will see the following notice, which you should follow by unplugging your YubiKey:

  7. After re-plugging your YubiKey, and going back to the "Digital Signature" certificate, you should see details about the installed credential, which is ready to be used for code signing:

Bonus: Storing more than one code signing credential onto your YubiKey

If you are producing Windows software that still needs to target platforms like Vista or XP, you might be saying: "That's all very well, but what if I need to sign my software with both an SHA-1 and SHA-256 Authenticode credential? There's only one Digital Signature slot on the YubiKey after all..."

Well, the thing is, this is one of the exact issues I have been faced with for Rufus, and I can tell you that, as far as code signing is concerned, the labels assigned to the certificate/credential storage slots are pretty much irrelevant. You can use any of these 4 slots to store any code signing credential you want (since they are referenced by their fingerprint), and we only used the "Digital Signature" PIV slot because that's the one that makes most sense for storing a code signing signature. However, if you also want to store an SHA-1 credential, you can use any of the remaining slots to do that.

My preference is to use the optional "Card Authentication" slot to store your extra SHA-1 credential (so that you can keep the "Authentication" and "Key Management" slots for actual authentication or key management, if you ever need them). At least, this is what I have been doing for double-signing my Rufus application, and neither SignTool nor the YubiKey seems to have any trouble with that.

Using the stored credentials with SignTool


Okay, so you have your code signing credential(s) safely stored on a secure YubiKey. Now what?
Clearly you can't use SignTool in the usual fashion, where you reference a local .p12 or .pfx file.

Instead, because the YubiKey is automatically detected as a credentials storage device by Windows, what you want to do, especially if you have multiple code signing credentials residing on it, is reference your credentials by their unique SHA-1 fingerprint in SignTool, and let Windows/YubiKey handle the rest. This is exactly what the /sha1 flag of SignTool is for.

However, before we can do that, we need to figure out the SHA-1 fingerprint of your certificate.
The simplest way to do that, while ensuring that you are really going to be accessing the credentials that you want to access, is:

  1. Go back to PIV Manager, and open the slot where the credential you are after resides:

  2. Click "Export Certificate" and save the file as a .crt (you will need to type the extension as part of the file name)

  3. Double click on the .crt you just saved and go to the "Details" tab

  4. Scroll down to the "Thumbprint" field (should be the very last) and copy its content. This is the SHA-1 fingerprint you are after:
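For what it's worth, it should also be possible to read the thumbprint from a command prompt, using the certificate file you exported above (mycert.crt being a placeholder name), and looking for the Cert Hash(sha1) line in the output:

certutil -dump mycert.crt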
Now you can use SignTool with /sha1 instead of /f. When you do so, you will be prompted to plug your YubiKey (if it isn't plugged in already) and then for your PIN, which, if you enter it successfully, will enable the signature operation.

I'll conclude with a real life example, using a YubiKey 4 where I store both an SHA-256 code signing credential (fingerprint 5759b23dc8f45e9120a7317f306e5b6890b612f0) and an SHA-1 credential (fingerprint 655f6413a8f721e3286ace95025c9e0ea132a984), that I use to sign and timestamp the dual SHA-1+SHA-256 Rufus binary:

SignTool sign /v /sha1 655f6413a8f721e3286ace95025c9e0ea132a984 /fd SHA1 /tr http://sha256timestamp.ws.symantec.com/sha256/timestamp rufus.exe
SignTool sign /as /v /sha1 5759b23dc8f45e9120a7317f306e5b6890b612f0 /fd SHA256 /tr http://sha256timestamp.ws.symantec.com/sha256/timestamp rufus.exe
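And, if you want to confirm that both signatures were applied properly, you should be able to do so with:

SignTool verify /pa /all rufus.exe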

IMPORTANT NOTE: Do *NOT* let Windows install the Yubikey Minidriver as part of Windows Update!


It looks like the latest versions of Windows insist on installing a YubiKey Minidriver, which ends up wreaking havoc on your ability to actually use a YubiKey as a signing device. If you let Windows have its way, you may end up getting a message stating The smart card cannot perform the requested operation or the operation requires a different smart card when attempting to sign your binary:



If you get this issue, just go to your installed software and remove the YubiKey Smart Card Minidriver:


Once you have done that, you should find that you can use your YubiKey for signing applications again.

Or, if you want to use the MiniDriver, you can follow the steps highlighted here.

Final words

Now that you've seen how to do it, I would strongly urge you to go and purchase a YubiKey (or any other FIPS 201 PIV device) and NEVER, EVER again store code signing credentials on anything other than a secure, password protected device that was designed precisely for this.

This means that, once you have done all of the above and validated that it works, you should DELETE your .p12/.pfx and remove any trace of your credential(s) from your computer.

Of course, if you are really worried, you may still choose to store a copy of said credential(s), on a backup CD-ROM (preferably in a password protected archive), that you'll only store in a locked place. But by all means, if you have a working YubiKey, you should not let your code signing credential(s) anywhere near any of the computers that you own!