tag:blogger.com,1999:blog-83619429452219834532024-03-13T23:07:33.120+00:00Pete's Blog We're full of IT!Petehttp://www.blogger.com/profile/00656449482260202625noreply@blogger.comBlogger94125tag:blogger.com,1999:blog-8361942945221983453.post-61298437161562911392021-01-05T18:26:00.006+00:002021-01-06T01:04:30.157+00:00Python script to fix EDK2 patches downloaded with ThunderBird<p>It looks like ThunderBird and the EDK2 mailing list don't play too nice together, and you get annoying double line feeds being inserted into patches sent to the list, which are a major pain to deal with. And since I've grown tired of manually having to fix something like this:</p>
<pre class="brush: text">
Subject:
[edk2-platforms][PATCH 1/1] Platform/RaspberryPi: Fix Linux kernel panic on reset/poweroff
From:
Pete Batard <pete@akeo.ie>
Date:
2021.01.05, 14:09
To:
devel@edk2.groups.io
Commit 94e9fba43d7e132be3c582c676968a7f408072c1 introduced an unconditional
call to PcdGet32 after we exit boot services, that produces a kernel panic
on Linux reset.
This addendum to the previous commit ensures that we only read the PCD and
apply the delay while we are still in UEFI, which is what we want anyway as
the goal was to fix the storage of NV variables set by the user from within
the UEFI firmware interface.
Signed-off-by: Pete Batard <pete@akeo.ie>
---
Platform/RaspberryPi/Library/ResetLib/ResetLib.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
index 4a50166dd63b..a70eee485ddf 100644
--- a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
+++ b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c

@@ -52,13 +52,13 @@ LibResetSystem (

* Only if still in UEFI.

*/

EfiEventGroupSignal (&gRaspberryPiEventResetGuid);

- }

- Delay = PcdGet32 (PcdPlatformResetDelay);

- if (Delay != 0) {

- DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",

- Delay / 1000000, (Delay % 1000000) / 100000));

- MicroSecondDelay (Delay);

+ Delay = PcdGet32 (PcdPlatformResetDelay);

+ if (Delay != 0) {

+ DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",

+ Delay / 1000000, (Delay % 1000000) / 100000));

+ MicroSecondDelay (Delay);

+ }

}

DEBUG ((DEBUG_INFO, "Platform %a.\n",

(ResetType == EfiResetShutdown) ? "shutdown" : "reset"));

-- 2.29.2.windows.2
</pre>
Into this:
<pre class="brush: text">Subject: [edk2-platforms][PATCH 1/1] Platform/RaspberryPi: Fix Linux kernel panic on reset/poweroff
From: Pete Batard <pete@akeo.ie>
Date: 2021.01.05, 14:09
To: devel@edk2.groups.io
Commit 94e9fba43d7e132be3c582c676968a7f408072c1 introduced an unconditional
call to PcdGet32 after we exit boot services, that produces a kernel panic
on Linux reset.
This addendum to the previous commit ensures that we only read the PCD and
apply the delay while we are still in UEFI, which is what we want anyway as
the goal was to fix the storage of NV variables set by the user from within
the UEFI firmware interface.
Signed-off-by: Pete Batard <pete@akeo.ie>
---
Platform/RaspberryPi/Library/ResetLib/ResetLib.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
index 4a50166dd63b..a70eee485ddf 100644
--- a/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
+++ b/Platform/RaspberryPi/Library/ResetLib/ResetLib.c
@@ -52,13 +52,13 @@ LibResetSystem (
* Only if still in UEFI.
*/
EfiEventGroupSignal (&gRaspberryPiEventResetGuid);
- }
- Delay = PcdGet32 (PcdPlatformResetDelay);
- if (Delay != 0) {
- DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",
- Delay / 1000000, (Delay % 1000000) / 100000));
- MicroSecondDelay (Delay);
+ Delay = PcdGet32 (PcdPlatformResetDelay);
+ if (Delay != 0) {
+ DEBUG ((DEBUG_INFO, "Platform will be reset in %d.%d seconds...\n",
+ Delay / 1000000, (Delay % 1000000) / 100000));
+ MicroSecondDelay (Delay);
+ }
}
DEBUG ((DEBUG_INFO, "Platform %a.\n",
(ResetType == EfiResetShutdown) ? "shutdown" : "reset"));
-- 2.29.2.windows.2
</pre>
<p>Here's a quick Python script that'll automate that for you:</p>
<pre class="brush: python">
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('files', type=argparse.FileType('rb+'), nargs='+')
    args = parser.parse_args()
    for file in args.files:
        buffer = bytearray(file.read())
        # Delete initial empty line
        while (buffer[0] == 0x0d) or (buffer[0] == 0x0a):
            del buffer[0]
        # Un-split Subject: CC: etc.
        for i in range(buffer.find(b'\x0d\x0a---')):
            if (buffer[i] == 0x3a) and (buffer[i+1] == 0x0d) and (buffer[i+2] == 0x0a):
                del buffer[i+1]
                buffer[i+1] = 0x20
        # Remove double CRLF from chunks
        i = buffer.find(b'\x0d\x0a@@')
        while i < len(buffer) - 3:
            if (buffer[i] == 0x0d) and (buffer[i+1] == 0x0a) and (buffer[i+2] == 0x0d) and (buffer[i+3] == 0x0a):
                del buffer[i]
                del buffer[i]
            i = i + 1
        file.seek(0)
        file.write(buffer)
        file.truncate()
</pre>Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-65232223367924553692020-12-16T19:28:00.006+00:002020-12-16T19:30:17.536+00:00UEFI Hexdump <p>If you're developing UEFI firmware content, sooner or later you're going to want to dump binary data using the debug facility.</p><p>And so, without further ado:<br /></p><pre class="brush: cpp">(...)
#include <Library/BaseLib.h>
#include <Library/PrintLib.h>
(...)
STATIC
VOID
DumpBufferHex (
  VOID*  Buf,
  UINTN  Size
  )
{
  UINT8* Buffer = (UINT8*)Buf;
  UINTN  i, j, k;
  char   Line[80] = "";

  for (i = 0; i < Size; i += 16) {
    if (i != 0) {
      DEBUG ((DEBUG_INFO, "%a\n", Line));
    }
    Line[0] = 0;
    AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), " %08x ", i);
    for (j = 0, k = 0; k < 16; j++, k++) {
      if (i + j < Size) {
        AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "%02x", Buffer[i + j]);
      } else {
        AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "  ");
      }
      AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), " ");
    }
    AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), " ");
    for (j = 0, k = 0; k < 16; j++, k++) {
      if (i + j < Size) {
        if ((Buffer[i + j] < 32) || (Buffer[i + j] > 126)) {
          AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), ".");
        } else {
          AsciiSPrint (&Line[AsciiStrLen (Line)], 80 - AsciiStrLen (Line), "%c", Buffer[i + j]);
        }
      }
    }
  }
  DEBUG ((DEBUG_INFO, "%a\n", Line));
}</pre>
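<p>By the way, if you just want to preview what this output looks like without building any UEFI code, here is a rough Python approximation of the same layout (an illustrative sketch, not part of the EDK2 code above; the spacing only approximates the C version's):</p>

```python
def dump_buffer_hex(buf: bytes) -> str:
    # Approximates the DumpBufferHex layout above: an offset column,
    # up to 16 hex bytes, then a printable-ASCII column.
    lines = []
    for i in range(0, len(buf), 16):
        chunk = buf[i:i + 16]
        # 16 bytes -> 47 characters of "xx xx ..."; pad shorter rows
        hex_part = ' '.join(f'{b:02x}' for b in chunk).ljust(47)
        # Bytes outside the printable ASCII range become '.'
        ascii_part = ''.join(chr(b) if 32 <= b <= 126 else '.' for b in chunk)
        lines.append(f'  {i:08x}  {hex_part}  {ascii_part}')
    return '\n'.join(lines)
```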
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-50419197312008339882020-08-08T17:21:00.005+01:002020-11-12T19:49:22.120+00:00Updating XML files with PowerShell<p>Say you have the following <code>file.xml</code>:</p>
<pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?>
<data>
  <item name="Item 1" id="0" />
  <item name="Item 2" id="1001" />
  <item name="Item 3" id="0" />
  <item name="Item 4" id="1002" />
  <item name="Item 5" id="1005" />
  <item name="Item 6" id="0" />
</data></pre>
<p></p>And you want to replace all those <code>"0"</code> <code>id</code> attributes with incremental values.<p>If you have PowerShell, this can be accomplished pretty easily with the following commands:<br /></p><pre class="brush: ps">$xml = New-Object xml
$xml.Load("$PWD\file.xml")
$i = 2001; foreach ($item in $xml.data.item) { if ($item.id -eq 0) { $item.id = [string]$i; $i++ } }
$xml.Save("$PWD\updated.xml")
</pre>
<p>Now your output (<code>updated.xml</code>) looks like:</p><pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?>
<data>
  <item name="Item 1" id="2001" />
  <item name="Item 2" id="1001" />
  <item name="Item 3" id="2002" />
  <item name="Item 4" id="1002" />
  <item name="Item 5" id="1005" />
  <item name="Item 6" id="2003" />
</data></pre>
<p>Easy-peasy...<br /></p>Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com2tag:blogger.com,1999:blog-8361942945221983453.post-82643987646094247862020-07-08T18:05:00.028+01:002020-07-20T21:48:55.298+01:00(Ab)using Microsoft's symbol servers, for fun and profit<div>Since I find myself doing this on a regular basis (<i>Hail Ghidra!</i>), and can never quite remember the commands.</div><div><br /></div><div>Say you have a little Microsoft executable, such as the latest ARM64 version of <code>usbxhci.sys</code>, that you want to investigate using <a href="https://ghidra-sre.org/" target="_blank">Ghidra</a>.</div><div><br /></div><div>Of course, one thing that can make the whole difference between hours of <i>"Where the heck is the function call for the code I am after?"</i> and a few seconds of <i>"Judging by its name, this call is the most likely candidate"</i> is the availability of the <code>.pdb</code> debug symbols for the Windows executable you are analysing.</div><div><br /></div><div>You may also know that, because of the huge corporate ecosystem they have where such information might be critical (as well as some government pressure to make it public), it so happens that Microsoft does make available a lot of the debug information that was generated during the compilation of Windows components. Now, since it can amount to a large volume of data (one can usually expect a <code>.pdb</code> to be 3 to 5 times larger than the resulting code), this debug information is not usually provided with Windows, unless you are running a Debug/Checked build.</div><div><br /></div><div>But it can "easily" be retrieved from Microsoft's servers. Here's how.</div><div><br /></div><div>First of all, you need to ensure that you have the Windows SDK or Windows Driver Kit installed.
If you have Visual Studio 2019 (remember, the <a href="https://visualstudio.microsoft.com/vs/community/" target="_blank">Community Edition of VS2019 is free</a>) with the C++ development environment, these should already have been installed for you. But really it's up to you to sort that out and alter the paths below as needed.</div><div><br /></div><div>With this prerequisite taken care of, you should find a command-line executable called <code>symchk.exe</code> somewhere in <code>C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\</code>. This is the utility that can connect to Microsoft's servers to fetch the symbol files, i.e. the compilation <code>.pdb</code>'s that Microsoft has made public.</div><div><br /></div><div>So, let's say we have copied our ARM64 xHCI driver (<code>USBXHCI.SYS</code> - Why Microsoft suddenly decided to YELL ITS NAME is unknown) to some directory. All you need to do to retrieve its associated <code>.pdb</code> then is issue the command:</div><div><br /></div><div><pre class="brush: text">"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\symchk.exe" /s srv*https://msdl.microsoft.com/download/symbols /ocx .\ USBXHCI.SYS</pre></div><div><br /></div><div>The <code>/s</code> flag indicates where the symbols should be retrieved from (here, Microsoft's remote server) and the <code>/ocx</code> flag, followed by a folder, indicates where the <code>.pdb</code> should be copied (here, the same directory as the one where we have our driver).</div><div><br /></div><div>If everything goes well, the output of the command should be:</div><div><br /></div><div><pre class="brush: text">SYMCHK: FAILED files = 0
SYMCHK: PASSED + IGNORED files = 1</pre></div><div><br /></div><div>with the important part being that the number of <code>PASSED files</code> is not zero, and you should find a newly created <code>usbxhci.pdb</code> in your directory. Neat!</div><div><br /></div><div><h3 style="text-align: left;"><i>"Hello, my name is Mr Snrub"</i><br /></h3></div><div><br /></div><div>So, what do you do with that?</div><div><br /></div><div>Well, I did mention Ghidra, and as a comprehensive disassembly/decompiler utility, Ghidra does of course have the ability to work with debug symbols if they happen to be available (sadly, it doesn't seem to have the ability to look them up automatically like IDA, or if it does, I haven't found where this can be configured), which helps turn an obtuse <code>FUN_1c003ac90()</code> function name into a much more indicative <code>XilRegister_ReadUlong64()</code>...</div><div><br /></div><div>For instance, let's say you happen to have been made aware that the reason why you currently can't use the rear USB-A ports for Windows 10 on the Raspberry Pi 4 is because Broadcom/VIA (most likely Broadcom, because they've already done everyone a number by implementing a DMA controller that chokes past 3 GB on the Bcm2711) have screwed up 64-bit PCIe accesses, and they end up returning garbage in the high 32-bit DWORD unless you only ever attempt to read 64-bit QWORDs as two sequential DWORDs instead of a single QWORD.</div><div><br /></div><div>As a result of this, you may be exceedingly interested to find out if there exists something in the function calls used by Microsoft's <code>usbxhci.sys</code> driver that can set 64-bit xHCI register accesses to be enacted as two 32-bit ones.</div><div><br /></div><div>Obviously then, if, after using the <code>.pdb</code> we've just retrieved above, Ghidra helpfully tells you that there does exist a function call at address <code>1c003ac90</code> called <code>XilRegister_ReadUlong64</code>, you are going to be
<b>exceedingly</b> interested in having a look at that call:</div><div><br /></div>
<div><pre class="brush: cpp">undefined8 XilRegister_ReadUlong64(longlong param_1,undefined8 *param_2)
{
  undefined8 local_30 [6];

  local_30[0] = 0;
  if (*(char *)(*(longlong *)(param_1 + 8) + 0x219) == '\0') {
    DataSynchronizationBarrier(3,3);
    if ((*(ulonglong *)(*(longlong *)(param_1 + 8) + 0x150) & 1) == 0) {
      // 64-bit qword access
      local_30[0] = *param_2;
    } else {
      DataSynchronizationBarrier(3,3);
      // 2x32-bit dword access
      local_30[0] = CONCAT44(*(undefined4 *)((longlong)param_2 + 4),*(undefined4 *)param_2);
    }
  } else {
    Register_ReadSecureMmio(param_1,param_2,3,1,local_30);
  }
  return local_30[0];
}</pre></div><div><br /></div><div>NB: The comments were not added by Ghidra. Ghidra may be good at what it does, but it's not <i>that</i> good...</div><div><br /></div><div>Guess what? It so happens that there exists an attribute somewhere that Microsoft uses bit 0 of to decide whether 64-bit xHCI registers should be read using two 32-bit accesses. Awesome, this looks exactly like what we're after.</div><div><br /></div><div>The corresponding disassembly also tells us that this <code>if</code> condition is ultimately encoded as a <code>tbnz</code> ARM64 instruction. So if we invert that logic, by using a <code>tbz</code> instead of <code>tbnz</code>, we should be able to force the failing 64-bit reads to be enacted as 2x32-bit, which may fix our xHCI driver woes...</div><div><br /></div><div>Let's do just that then, by editing <code>USBXHCI.SYS</code> and changing the <code>EA 00 00 37</code> sequence at address <code>0x03a0d0</code> to <code>EA 00 00 36</code> (<code>tbnz</code> → <code>tbz</code>) and, for good measure, do the same for <code>XilRegister_WriteUlong64</code> at address <code>0x005b34</code>, by also changing <code>0A 01 00 37</code> into <code>0A 01 00 36</code> to <b>reverse</b> the logic.
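<p>If you'd rather script that byte-level edit than perform it in a hex editor, the flip can be expressed in a few lines of Python (a hedged sketch: <code>flip_tbnz_to_tbz</code> is a made-up helper, and the offsets are the ones quoted above):</p>

```python
def flip_tbnz_to_tbz(image: bytearray, offset: int) -> None:
    # ARM64 instructions are 4 bytes, little-endian; for tb(n)z testing a
    # bit number below 32, the top byte is 0x37 for tbnz and 0x36 for tbz.
    assert image[offset + 3] == 0x37, 'expected a tbnz instruction here'
    image[offset + 3] = 0x36

# e.g., for the tbnz in XilRegister_ReadUlong64:
# flip_tbnz_to_tbz(driver, 0x03a0d0)
```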
<i>"Yes that'll do".</i></div><div><i><br /></i></div><div style="text-align: left;"><h3></h3></div><div style="text-align: left;"><h3><i>"I like the way Snrub thinks!"</i></h3></div><div><br /></div><div></div><div>Well, we may have patched our driver and tried to fool the system by <b>reversing</b> some stuff, but, <a href="https://en.wikipedia.org/wiki/Marge_vs._the_Monorail" target="_blank">as <i>the Simpsons</i> have long attested</a>, it's not going to do much unless you have a trusted sidekick to help you out.</div><div><br /></div><div>Obviously, since we broke the signature of that driver the minute we changed even a single bit, we're going to have to tell Windows to disable signature enforcement for the boot on our target ARM64 platform, which can be done by setting <code>nointegritychecks on</code> in the BCD. And while we're at it we may want to enable test signing as well. Now, most of the commands you'll see are for the local BCD, but that's not what we are after here, since we want to modify a USB installed version of Windows, where, in our case, the BCD is located at <code>S:\EFI\Microsoft\Boot\BCD</code>. So the trick to achieving that (from a command prompt running elevated) is:</div><div><br /></div><div><pre class="brush: text">bcdedit /store S:\EFI\Microsoft\Boot\BCD /set {default} testsigning on
bcdedit /store S:\EFI\Microsoft\Boot\BCD /set {default} nointegritychecks on</pre></div><div><br /></div>
<div>However, if you only do that and (after taking ownership and granting yourself full permissions so that you can replace the existing driver) copy the altered <code>USBXHCI.SYS</code> to <code>Windows\System32\drivers\</code> you will still be greeted by an obnoxious</div><div><br /></div>
<div><pre class="brush: text">Recovery
Your PC/Device needs to be repaired
The operating system couldn't be loaded because a critical system driver is missing or contains errors.
File: \Windows\System32\drivers\USBXHCI.SYS
Error code: 0xc0000221</pre></div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-U8V7cy3Tgeo/XwX8QQT6YFI/AAAAAAAACmA/sS2-LzTVUGwcinQcUs1FZKvx3jdrRXs9ACK4BGAsYHg/s2496/SANY4791.JPG" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1408" data-original-width="2496" height="362" src="https://1.bp.blogspot.com/-U8V7cy3Tgeo/XwX8QQT6YFI/AAAAAAAACmA/sS2-LzTVUGwcinQcUs1FZKvx3jdrRXs9ACK4BGAsYHg/w640-h362/SANY4791.JPG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Oh noes!<br /></td></tr></tbody></table><div><br /></div><div><br /></div>
<div>The problem, which is what generates the <code>0xc0000221</code> (<code>STATUS_IMAGE_CHECKSUM_MISMATCH</code>) error code, is that the optional PE checksum field, used by Windows to validate critical boot executables, has not been updated after we altered <code>USBXHCI.SYS</code>. Therefore checksum validation fails, and this is precisely what the Windows boot process is complaining about.</div><div><br /></div><div>Fixing this is very simple: Just download <code>PEChecksum64.exe</code> (e.g. from <a href="https://download.cnet.com/PEChecksum-64-bit/3000-2094_4-75312518.html" target="_blank">here</a>) and issue the command:</div><div><br /></div>
<div><pre class="brush: text">D:\Dis\>PEChecksum64.exe USBXHCI.SYS
USBXHCI.SYS: Checksum updated from 0x0008D39B to 0x0008D19B</pre></div><div><br /></div><div>For good measure, you will also need to self-sign that driver, so that you can avoid Windows booting into recovery mode with an obnoxious <code>0xc000000f</code> from <code>winload.exe</code> (though you can still proceed to full boot from there).<br /></div><div><br /></div><div>Now we finally have all the pieces we need.</div><div><br /></div><div>For instance, we can replace <code>USBXHCI.SYS</code> on a <b>fast</b> USB 3.0 flash drive containing a Raspberry Pi Windows 10 ARM64 installation created using <a href="https://www.worproject.ml/downloads" target="_blank">WOR</a> (and if you happen to have the latest EEPROM flashed as well as a version of the Raspberry Pi 4 UEFI firmware that includes <a href="https://edk2.groups.io/g/devel/message/61814" target="_blank">this patch</a>, you can actually boot the whole thing straight from USB), and, while we are at it, remove the 1 GB RAM limit that the Pi 4 had to have when booting from the USB-C port (since we're not going to use that USB controller), by issuing, from an elevated prompt:</div><div><br /></div>
<div><pre class="brush: text">bcdedit /store Y:\EFI\Microsoft\Boot\BCD /deletevalue {default} truncatememory</pre></div><div><br /></div>
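<p>Incidentally, the checksum fix-up that <code>PEChecksum64.exe</code> performed earlier isn't black magic. The PE checksum algorithm, while not officially documented by Microsoft, has been widely reimplemented, and a hedged Python sketch of it (under the assumption that this matches what the tool does; <code>pe_checksum</code> is a made-up helper, and <code>checksum_offset</code> is the file offset of the optional header's CheckSum field) looks like this:</p>

```python
def pe_checksum(data: bytes, checksum_offset: int) -> int:
    # Sum the file as 16-bit little-endian words with carry folding,
    # skipping the 4-byte CheckSum field itself, then add the file size.
    size = len(data)
    if size % 2:
        data = data + b'\x00'  # pad odd-sized files
    total = 0
    for i in range(0, len(data), 2):
        if checksum_offset <= i < checksum_offset + 4:
            continue  # the stored checksum is not part of the sum
        total += data[i] | (data[i + 1] << 8)
        total = (total & 0xffff) + (total >> 16)  # fold the carry back in
    total = (total & 0xffff) + (total >> 16)
    return total + size
```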
<div>Do all of the above and, who knows, you might actually end up with a usable Windows 10 ARM64 OS, running from one of the rear panel's <i>fast</i> USB 3.0 ports with a whopping 3 GB of RAM, on your Raspberry Pi 4.</div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-expkEVlOooU/XwXtqcICwJI/AAAAAAAAClk/DeWsWe4MIIMlEfmabfVFc9iEl6AE8YNmACK4BGAsYHg/s1029/Untitled2.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="701" data-original-width="1029" height="436" src="https://1.bp.blogspot.com/-expkEVlOooU/XwXtqcICwJI/AAAAAAAAClk/DeWsWe4MIIMlEfmabfVFc9iEl6AE8YNmACK4BGAsYHg/w640-h436/Untitled2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Now, isn't that something?<br /></td></tr></tbody></table><div><br /></div><div>But this is just a post about using Microsoft's symbol servers.</div><div> It's not a post about running full-blown Windows 10 on the Raspberry Pi 4, right?</div><div><br /></div><div><b>Addendum:</b> In case you don't want to have to go through the taking of ownership, patching, updating of the PE checksum and digitally re-signing of the file yourself, may I also interest you in <a href="https://github.com/pbatard/winpatch" target="_blank">winpatch</a>?</div>Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com1tag:blogger.com,1999:blog-8361942945221983453.post-90284981133017363772020-06-23T19:08:00.143+01:002022-06-26T20:33:08.937+01:00Et tu, Microsoft<div>It's a beautiful Saturday afternoon.</div><div><br /></div><div> Everything is going as peachy as could be, with the satisfaction of having released a new version of your software, just a couple days ago, that wasn't short-lived due to the all-too-common subsequent realisation that you managed to introduce a massive
"oops", such as <a href="https://github.com/pbatard/rufus/issues/1213" target="_blank">including completely wrong drivers for a specific architecture</a> (courtesy of Rufus 3.10) or <a href="https://github.com/pbatard/rufus/issues/1396#issuecomment-584933023" target="_blank">having your ext formatting feature break when a partition is larger than 4 GB</a> (courtesy of Rufus 3.8)... Sometimes I have to wonder if Rufus isn't suffering from the same <a href="https://screenrant.com/star-trek-movies-odd-number-curse-explained/" target="_blank">curse as the original Star Trek movie releases</a> (albeit inverted in our case).</div><div><br /></div><div>Thus, basking in the contentment of a job well done, you fire up your trusty Windows 10, which you upgraded to the 2004 release just a couple weeks ago (along with that Storage Space array you use), and go on your merry way, doing inconsequential Windows stuff, such as deciding to rename one folder.</div><div><br /></div><div><b>And that's when all hell breaks loose...</b></div><div><br /></div><div>Suddenly, your file explorer freezes, every disk access becomes unresponsive and you are seeing your most important disk (the one that contains, among other things, all the ISOs you accumulated for testing with <a href="https://github.com/pbatard/rufus" target="_blank">Rufus</a>, and which you made sure to set up with redundancy using Storage Spaces, along with an <a href="https://en.wikipedia.org/wiki/ReFS" target="_blank">ReFS</a> file system <a href="https://blog.habets.se/2017/08/ReFS-integrity-is-not-on-by-default.html" target="_blank">where file integrity had been enabled</a>) undergoing constant access, with no application in sight seemingly performing those...</div><div><br /></div><div>Oh and rebooting (provided you are patient enough to wait the 10 minutes it takes to actually reboot) doesn't help in the slightest.
If anything, it appears to make the situation worse as Windows now takes forever to boot, with the constant disk access issue of your Storage Space drive still in full swing.</div><div><br /></div><div>Yet the Storage Spaces control panel reports that all of the underlying HDDs are fine, a short <a href="https://en.wikipedia.org/wiki/S.M.A.R.T." target="_blank">SMART</a> test on those also reports no issue and even a desperate attempt to try to identify what specific drive might be the source of the trouble, by trying each combination of 3 of our 4 HDDs, yields nothing. If nothing else, it would confirm the idea that Microsoft did a <i>relatively</i> solid job with Storage Spaces, at least in terms of hardware gotchas, considering that every other parity solution I know of, such as the often decried Intel RAID, would scream bloody murder if you removed another drive before it got through the super time-consuming rebuilding of the whole array (which is the <b>precise</b> reason I swore off using Intel RAID and moved to Storage Spaces).<br /></div><div><br /></div><div>An <a href="https://en.wikipedia.org/wiki/ReFS" target="_blank">ReFS</a> issue then? If that's the case, talk of a misnomer for something that's supposed to be resilient... <br /></div><div><br /></div><div>Indeed, the Event Viewer shows a flurry of ReFS errors, ultimately culminating in this ominous message, which gets repeated many times as the system attempts to access the drive, as you end up finding that your drive has been "remounted" as RAW:<br /></div>
<div><pre class="brush: text">Volume D: is formatted as ReFS but ReFS is unable to mount it;
ReFS encountered status The volume repair was not successful...</pre></div><div><br /></div>
<div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-wdBs86cH2-E/XvI5aUl_t-I/AAAAAAAACko/v5EXMbeRHVAP3zABrMYxd-WOy0UJw4VuQCK4BGAsYHg/s2264/Image1.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1034" data-original-width="2264" height="292" src="https://1.bp.blogspot.com/-wdBs86cH2-E/XvI5aUl_t-I/AAAAAAAACko/v5EXMbeRHVAP3zABrMYxd-WOy0UJw4VuQCK4BGAsYHg/w640-h292/Image1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Someone at Microsoft may want to look up the definition of resiliency...</i><br /></td></tr></tbody></table><div><br /></div><div><br /></div><div>Ugh, that's the second ReFS drive I lose in about a month (earlier was an SSD that hosted all my VMs, and that Windows mysteriously overwrote as a Microsoft Reserved Partition)! If that's indicative of a trend, I think that Microsoft might want to weather-test their data oriented solutions a little better. Things used to be rock-stable, but I can't say I've been impressed by Windows 10's prowess on the matter lately...</div><div><br /></div><div>And yes, I do have some backups of course (well, I didn't for those VMs, but that was data I could afford to lose) but they are spread all over the place on account that <i>I am not made of money, dammit!</i></div><div><br /></div><div>See, the whole point of entrusting my data to a 10 TB parity array made of 4x4 TB HDDs was that I could reuse drives that I (more or less) had lying around, and you'd better believe those were the <i>cheapest</i> 4 TB drives I'd been able to lay my hands on. 
In other words, Seagate, since HDD manufacturers have long decided, or, should I say, <i>colluded</i>, that they should stop trying to compete on price, as further evidenced by the fact that I still paid less for an 8 TB HDD, <b>two frigging years ago</b>, than the cheapest price I could find <b>for the exact same model today</b>.</div><div><br /></div><div><i>"Storage <u>is</u> getting cheaper"</i>, my ass!</div><div><br /></div><div> Oh and since we're talking about Seagate and reliability, I will also state that, in about 20 years of using almost exclusively Seagate drives, on account that they are constantly on the cheaper side (though Seagate and other manufacturers may want to explain why on earth it is cheaper to buy a USB HDD enclosure, with cable, PSU and SATA ↔ USB converter, than the same <b>bare</b> model of HDD), I have yet to experience a single drive failure for any Seagates I use in my active RAID arrays.</div><div><br /></div><div>So when people say Seagate is too unreliable, I beg to <i>anecdotally</i> differ since, for the price, Seagate's more than reliable enough. I mean, between paying exactly 0 € for 10 TB with parity vs. between 500 to 700 € (current price, <b>at best</b>) for a parity or mirrored NAS array, there's really no contest. I don't mind that a lot of people appear to have semi-bottomless pockets, and can't see themselves go with less than a mirroring solution with brand new NAS drives. But that's no reason
to look down on people who do use parity along with cheap non-NAS drives, because price is far from being
an inconsequential factor when it comes to the preservation of their
data...</div><div><br /></div><div>And it's even more true here as the issue at hand has <b>nothing</b> to do with using cheap hardware, and everyone knows that a parity or mirroring solution is worth nothing if you don't also combine it with offline backups, which means even more disks, preferably of large capacity, and therefore even more budget to provision...<br /></div><div><br /></div><div>All this to say that there's a good reason why I don't have a single 8 or 10 TB HDD lying around, with all my backups for the array that went offline, and why, as much as I wish otherwise, there are going to be gaps in the data I restore... So yeah, count me less than thrilled with a data loss that wasn't incurred by a hardware failure or my own carelessness (the only two ever valid causes for losing data).</div><div><br /></div><div>Alas, with the Windows 10 2004 feature update, it appears that the good folks at Microsoft decided that there just weren't enough ways in which people could kill their data. So they created a brand new one.</div><div><br /></div><div>Enter <a href="https://support.microsoft.com/en-gb/help/4570719/workaround-and-recovery-steps-for-issue-with-some-parity-storage-space" target="_blank">KB4570719</a>.<br /></div><div><br /></div><div>The worst part of it is that I've seen reports indicating that this, as well as other corollary issues, was pointed out to Microsoft by Windows Insiders as far back as September 2019. So why on earth was something that should instantly have been flagged as a <b>super critical data loss issue</b> included in the May 2020 update?</div><div><br /></div><div> Oh and of course, at the time of this post, i.e. about one month after the data-destructive Windows update was released, there's still no solution in sight...
though, from what I have found, non-extensible parity Storage Spaces may be okay to use, as long as these were created using PowerShell commands to make them non-dynamically extensible, rather than through the UI, which forces extensibility.<br /></div><div><br /></div><div><br /></div><div>If this post seems like a rant, it's because it mostly is, considering that I am less than thrilled at having had to waste one week trying to salvage what I could of my data. But since we need to conclude this little story, let me impart the following two truths upon you:<br /></div><div><br /></div><div>1. EVERYTHING, and I do mean EVERYTHING, is actively trying to murder your data.<br /> Do not trust the hardware. Do not trust yourself. And especially, do not trust the Operating System not to plunge a sharp blade straight through your data's toga, during the <a href="https://en.wikipedia.org/wiki/Ides_of_March" target="_blank"><i>Ides of June</i></a>.</div><div><br /></div><div>2. (Since this is something I am all too commonly facing with Rufus' user reports) It doesn't matter how large and well-established a software company is compared to an Independent Software Developer; the OS can still very much be the one and only reason why third-party software appears to be failing, and you should always be careful never to consider the OS above suspicion. There is no more truth to <i>"surely a Microsoft (or an Apple or a Google for that matter) would not ship an OS that contains glaring bugs"</i> today than there has been in the past, or than there will be in the future.<br /> The OS can and does fail spectacularly at times (and I have plenty more examples besides this one that I could provide).
So don't fail to account for that possibility.<br /></div>Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com2tag:blogger.com,1999:blog-8361942945221983453.post-30887034562232776112020-05-15T20:53:00.010+01:002020-07-03T18:27:28.837+01:00Why is my Samba connection failing?<div>Or how nice it is to have a problem that has long eluded you finally explained.</div><div><br /></div><div><h3 style="text-align: left;">Part 1: The horror</h3></div>
<div>You see, I've been using a <a href="https://www.friendlyarm.com/index.php?route=product/product&product_id=225" target="_blank">FriendlyArm/FriendlyElec RK3399-based NanoPC-T4</a> to run Linux services, such as a staging web server for Rufus, a network print host, and various other things, as well as a <a href="https://en.wikipedia.org/wiki/Samba_(software)" target="_blank">Samba File Server</a>...</div><br />
<div>However, this Samba functionality seemed to be plagued like there was no tomorrow: Almost every time I tried to fetch a large file from it, Windows would freeze during transfer, with no recovery and not even the possibility of cancelling unless the Samba service was restarted manually on the server.</div><br />
<div>But what was more vexing was that these problems with Samba did not manifest themselves until I switched from using an old Lubuntu distribution, provided by the manufacturer of that device, to a more up-to-date Armbian. With Lubuntu, Samba seemed rock-solid, but with Armbian, it was hopeless.</div><br /><br />
<div>This became so infuriating that I had to give up on using Samba on that machine altogether and, considering that things usually seemed to be okay-ish after the service had been restarted, I dismissed it as a pure Samba/aarch64 bug, that newer versions of Samba or Debian had triggered, and that would eventually get fixed. But of course, that long awaited fix never seemed to manifest itself and I had better things to do than invest time I didn't have trying to troubleshoot a functionality that wasn't that critical to my workflow.</div><br /><br />
<div>Besides, the Samba logs were all but useless. Nothing in there seemed to provide any indication that Samba was even remotely unhappy. And of course, you can forget about Windows giving you any clue about why the heck your Samba file transfers are freezing...</div><br />
<div><h3 style="text-align: left;">Part 2: The light at the end of the tunnel</h3></div>
<div>Recently however, in the course of the Raspberry Pi 4 UEFI firmware experiments, it turned out that I was using that same server to test UEFI HTTP boot of a large (900 MB) ISO, which was being served from the Apache server running on that NanoPC machine, and had no joy getting the full transfer to complete either. Except, there, it wasn't freezing. It just seemed to produce a bunch of <code>TcpInput: received a checksum error packet</code> errors before giving up on the transfer altogether...</div><br />
<div><pre class="brush: text">URI: http://10.0.0.7/~efi/ubuntu.iso
File Size: 916357120 Bytes
Downloading...1%<br />TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
TcpInput: received a checksum error packet
TcpInput: Discard a packet
HttpTcpReceiveNotifyDpc: Aborted!
Error: Server response timeout.</pre></div><br />
<div>Yet, serving the same content from the native python3 HTTP server (<code>python3 -m http.server 80</code>, a super convenient command to know, as it serves any content from the current directory over HTTP through the specified port) appeared to be okay, albeit with the occasional checksum error. This was suddenly starting to look like a lot of compounded network errors... Could this be related to that Samba issue?</div><br /><br />
<div>Now, the first thing you do when you get reports of TCP checksum errors, is try a different cable, a different switch and so on, to make sure that this is not a pure hardware problem. But I had of course tried that during the process of trying to troubleshoot the failing Samba server, and, once again, the results of switching equipment and cabling around were all negative.</div><br />
<div>But at least a bunch of checksum errors does give you something to start working with.</div><div><br /></div><div>For one thing, you can monitor these errors with tcpdump (<code>tcpdump -i eth0 -vvv tcp | grep incorrect</code>) and, more importantly, you may find some <a href="https://forum.armbian.com/topic/13544-nanopi-m4-and-other-rk3399-need-tcpudp-offloading-disabled/" target="_blank">very relevant articles</a> that point you to the very root of the problem.</div><br />
<div>Long story short, if <code>tcpdump -i eth0 -vvv tcp | grep incorrect</code> produces loads of checksum errors on the platform you serve content from, you may want to look into disabling offloading on the network adapter with something like:</div><br />
<div><pre class="brush: text">ethtool -K eth0 rx off tx off</pre></div><br />
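Note that <code>ethtool</code> settings do not survive a reboot. One way to make the workaround stick is a small systemd oneshot unit along these lines (a sketch only: the unit name is made up, and your interface may not be <code>eth0</code>):

```ini
# /etc/systemd/system/disable-offload.service (hypothetical unit name)
[Unit]
Description=Disable TX/RX checksum offloading on eth0
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eth0 rx off tx off

[Install]
WantedBy=multi-user.target
```

After saving it, run <code>systemctl daemon-reload</code> and <code>systemctl enable --now disable-offload.service</code>. Appending the <code>ethtool</code> line to <code>/etc/rc.local</code> achieves the same thing, if your distro still uses it.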
<div>Or you may continue to hope that the makers of your distro will take action, but that might just turn out to be wishful thinking...</div>Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com4tag:blogger.com,1999:blog-8361942945221983453.post-69249181314171704382019-11-17T17:51:00.000+00:002019-12-14T22:26:42.814+00:00PowerShell script to Convert UTF-8 misinterpreted file namesYou'd think that somebody else would have come up with a quick script to do just that on Windows, but it looks like nobody else bothered, so here goes.<br />
<br />
Here's the deal: You copied a bunch of files, and somewhere along the way, one of the applications screwed up and did not produce actual Unicode file names but instead misinterpreted the UTF-8 sequences as CodePage 1252, resulting in something dreadful like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-ZFrHrF_DYYQ/XdGNgDjlrSI/AAAAAAAACdk/1cv54HygoFsQoUREf8LN0RxgX6gKBxOsACLcBGAsYHQ/s1600/Image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="217" data-original-width="831" height="166" src="https://1.bp.blogspot.com/-ZFrHrF_DYYQ/XdGNgDjlrSI/AAAAAAAACdk/1cv54HygoFsQoUREf8LN0RxgX6gKBxOsACLcBGAsYHQ/s640/Image1.png" width="640" /></a></div>
<br />
And now you'd like to have a quick way to convert the 1252-interpreted UTF-8 to actual UTF-8. So you look around thinking that, surely, someone must have done something to sort this annoyance, but the only thing you can find is a UNIX perl script called <code>convmv</code>, which isn't really helpful. Why hasn't anyone crafted a quick PowerShell script to do the same on Windows already?<br />
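Incidentally, if you have a Linux box (or WSL) handy, the corruption itself is easy to reproduce with <code>iconv</code>, which makes it clear what such a script has to undo (this is purely an illustration; the string is made up, and a UTF-8 terminal is assumed):

```shell
# Take a valid UTF-8 string and re-encode it as if its bytes were CP 1252:
# the 2-byte UTF-8 sequence for 'é' (0xC3 0xA9) becomes the two characters 'Ã©'.
printf 'héllo' | iconv -f CP1252 -t UTF-8
# → hÃ©llo

# The repair is simply the reverse conversion:
printf 'hÃ©llo' | iconv -f UTF-8 -t CP1252
# → héllo
```

The PowerShell script does exactly this second conversion, one file name at a time.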
<br />
Well, it turns out that, because of PowerShell's limitations, and Windows getting in the way of enacting a <b>proper</b> conversion of 1252 to UTF-8, producing such a script is actually a minor pain in the ass. Still, someone has now produced such a thing:<br />
<pre class="brush: ps">#region Parameters
param(
  # (Optional) The directory to process
  [string]$Dir = "."
)
#endregion

# You'll need to have your console set to CP 65001 AND use NSimSun as your
# font if you want any hope of displaying CJK characters in your console...
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8

$files = Get-ChildItem -File -Path $Dir -Recurse -Name
foreach ($f in $files) {
  # Re-encode the mangled name back to the CP 1252 bytes it came from, then
  # decode those bytes as the UTF-8 they should have been all along
  $bytes = [System.Text.Encoding]::GetEncoding(1252).GetBytes($f)
  $nf = [io.path]::GetFileName([System.Text.Encoding]::UTF8.GetString($bytes))
  Write-Host "$f → $nf"
  # Must use -LiteralPath else files that contain '[' or ']' in their name produce an error
  Rename-Item -LiteralPath (Join-Path $Dir $f) -NewName "$nf"
}

# Produce a "Press any key" message when run with right click
$auxRegKey = '\SOFTWARE\Classes\Microsoft.PowerShellScript.1\Shell\0\Command'
$auxRegVal = (Get-ItemProperty -LiteralPath HKLM:$auxRegKey).'(default)'
$auxRegCmd = $auxRegVal.Split(' ', 3)[2].Replace('%1', $MyInvocation.MyCommand.Definition)
if ("`"$($MyInvocation.Line)`"" -eq $auxRegCmd) {
  Write-Host "`nPress any key to exit..."
  $null = $Host.UI.RawUI.ReadKey('NoEcho,IncludeKeyDown')
}</pre>
<br />
If you save this script to something like <code>utf8_rename.ps1</code> in the top directory where you have your misconverted files, and then use <i>Run with PowerShell</i> in the explorer's context menu, you should then see some output like this (provided your console is set to codepage 65001, a.k.a. UTF-8, <b>and</b> you select a font that actually supports CJK characters, such as <i>NSimSun</i>; Microsoft will really have to explain how they have no trouble displaying CJK with <i>NSimSun</i>, but still can't, or won't, do it with <i>Lucida Console</i>):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-1rnO_YC2QY4/XdF_b7LSo3I/AAAAAAAACdU/Q1jPGkPbo9Y-RAVbnkIutWPPaizw2JkuACEwYBhgL/s1600/Image2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="282" data-original-width="1600" height="112" src="https://1.bp.blogspot.com/-1rnO_YC2QY4/XdF_b7LSo3I/AAAAAAAACdU/Q1jPGkPbo9Y-RAVbnkIutWPPaizw2JkuACEwYBhgL/s640/Image2.png" width="640" /></a></div>
<br />
Eventually, your file names should have been converted to their expected values, and all will be well:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-zDLpF-Wbq_M/XdGNgIL71XI/AAAAAAAACdg/pAha5rELloQL3FpALsLjdBFfgpbKuXTRQCEwYBhgL/s1600/Image3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="216" data-original-width="834" height="164" src="https://1.bp.blogspot.com/-zDLpF-Wbq_M/XdGNgIL71XI/AAAAAAAACdg/pAha5rELloQL3FpALsLjdBFfgpbKuXTRQCEwYBhgL/s640/Image3.png" width="640" /></a></div>
<br />
<br />
That is, until someone who thinks it's okay to <b>not properly support UTF-8 absolutely EVERYWHERE</b> (Hey Microsoft, how about some UTF-8 Win32 APIs already?) screws up and forces people to manually unscrew their codepage handling yet again...<br />
<br />
<h3>
Bonus</h3>
By the way, if you're using Windows 10 19H1 or later, you should know that Microsoft finally added a setting to set the system codepage to UTF-8, which <b>seems</b> to finally improve on the failed codepage conversions that prompted the above script. Even though it says that it's in Beta, you may want to enable it:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-ieh39y8zebM/XdHOFULvqCI/AAAAAAAACd0/B_5-bgEY4MwtxP-Mn6C6iHn6vnCHNht1QCLcBGAsYHQ/s1600/Image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1268" data-original-width="910" height="640" src="https://1.bp.blogspot.com/-ieh39y8zebM/XdHOFULvqCI/AAAAAAAACd0/B_5-bgEY4MwtxP-Mn6C6iHn6vnCHNht1QCLcBGAsYHQ/s640/Image1.png" width="458" /></a></div>
<br />Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com1tag:blogger.com,1999:blog-8361942945221983453.post-35651397418844635602019-07-24T12:53:00.002+01:002020-10-05T16:29:02.640+01:00Installing Debian ARM64 on a Raspberry Pi 3 in UEFI mode<br />
That's right baby, we're talking <b>vanilla</b> Debian ARM64 (<b>not</b> Raspbian, <b>not</b> Armbian) in <b>pure</b> UEFI mode, where you'll get a GRUB UEFI prompt allowing you to change options at boot and everything.<br />
<br />
At long last, the Raspberry Pi can be used to install <b>vanilla</b> GNU/Linux distributions in the same manner as you can do on a UEFI PC. Isn't that nice?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-dGwiX9vSsWk/XVQ_pWwneQI/AAAAAAAACZw/YahnMs6cRlgHruDqwUS9zon2TXBEVSwYgCLcBGAs/s1600/debian.png"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://1.bp.blogspot.com/-dGwiX9vSsWk/XVQ_pWwneQI/AAAAAAAACZw/YahnMs6cRlgHruDqwUS9zon2TXBEVSwYgCLcBGAs/s640/debian.png" width="640" /></a></div>
<br />
Not that I don't like Raspbian or Armbian (as a matter of fact, I am impressed by the very fine job the Armbian maintainers are doing with their distro), but I have now spent too much time helping with the <a href="https://github.com/tianocore/edk2-platforms/tree/master/Platform/RaspberryPi/RPi3">UEFI Raspberry Pi 3 effort</a> not to push this whole endeavour to its logical conclusion: installing vanilla ARM64 GNU/Linux distros. That's because, in terms of long term support and features, nothing beats a vanilla distro. I mean, what's the point of having a 64-bit CPU if the distro you're going to install forces you to use 32-bit?<br />
<br />
<h3>
Prerequisites</h3>
<h4>
Hardware:</h4>
<ul>
<li>A micro SD card with sufficient space (16 GB or more recommended). You may also manage with a USB Flash Drive, but this guide is geared primarily towards SD card installation</li>
</ul>
<ul>
<li>A Raspberry Pi 3 (Model B or Model B+) <b>with a proper power source</b>. If you're ever seeing a lightning bolt on the top left of your display during install, please invest into a power supply that can deliver more wattage.</li>
</ul>
Note that our goal here is to install the system on an SD card, through netinstall, using a single medium for the whole installation process.<br />
<br />
In other words, there is no need to use an additional USB Flash Drive, as we could, to boot the Debian installer and then install from USB to SD. This is mostly because it's inconvenient to have to use two drives when one will most certainly do, and also because, while USB to SD may look easier on paper (no need to fiddle with the "CD-ROM" device, for instance), it's actually more difficult to complete properly.<br />
<br />
Thus, while I'll give you some pointers on how to perform a USB-based installation in Appendix D, I can also tell you, <b>from experience</b>, that you are better off not using a separate USB drive as your installation medium, and instead performing the installation from a single SD card, as described in this guide.<br />
<br />
<h4>
Software:</h4>
<ul>
<li>The latest Debian ARM64 net install ISO.<br /><br /> You can download it from <a href="https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/">https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/</a><br /> (<code>debian-##.#.#-arm64-netinst.iso</code>, 250 MB).</li>
</ul>
<ul>
<li>The latest Raspberry Pi 3 UEFI firmware binary, along with the relevant Broadcom bootloader support files (i.e. <code>bootcode.bin</code>, <code>config.txt</code>, <code>fixup.dat</code>, <code>start.elf</code>).<br /><br /> You can find a ready-to-use archive with all of the above at <a href="https://github.com/pftf/RPi3/releases">https://github.com/pftf/RPi3/releases</a><br /> (<code>RPi3_UEFI_Firmware_v#.##.zip</code>, 3 MB).<br /><br /> Note that this firmware archive works for <b>both</b> the Raspberry Pi 3 Model B and the Raspberry Pi 3 Model B+ (as the relevant Device Tree is automatically selected during boot).</li>
</ul>
<ul>
<li>(Optional) The non-free WLAN firmware binaries that are needed if you want to use Wifi for the installation.<br />Note that, if you picked up the archive above then you don't need to do anything as the WLAN firmware binaries are included in it too.</li>
</ul>
<br />
<h3>
Preparation</h3>
<br />
Note: a complete example of how to achieve the first 3 steps below using <code>DISKPART</code> on Windows or <code>fdisk</code> + <code>mkfs</code> on Linux is provided in Appendix A at the end of this post.<br />
<ul>
<li>Partition your SD media as MBR and create a single partition of 300 MB of type <code>0x0e</code> (FAT16 with LBA).<br /><b>Do not</b> be tempted to use GPT as the partition scheme or <code>0xef</code> (ESP) for the partition type, as the on-die Broadcom bootloader does not support either of those. It <b>must</b> be MBR and type <code>0x0e</code>. You can use the command-line utilities <code>fdisk</code> on Linux or <code>DISKPART</code> on Windows to do that.</li>
</ul>
<ul>
<li>Set the partition as active/bootable. This is <b>very important</b> as, otherwise, the Debian partition manager will not automatically detect it as ESP (EFI System Partition) which will create problems that you have to manually resolve (See Appendix C).<br /> If using <code>fdisk</code> on Linux, you can use <code>a</code> to set the partition as active.<br /> If using Windows, you can use <code>DISKPART</code> and then type the command <code>active</code> after selecting the relevant disk and partition.</li>
</ul>
<ul>
<li>Format the partition as FAT16. It <b>MUST</b> be FAT16 and not FAT32, as the Debian partition manager will not detect it as ESP otherwise and, again, you will have to perform extra steps to salvage your system before reboot (Appendix C).<br />The Linux and Windows base utilities should be smart enough to use FAT16 and not FAT32 for a 300 MB partition, so you should simply be able to use <code>mkfs.vfat /dev/<yourdevice></code> (Linux) or <code>format fs=fat quick</code> in Windows' <code>DISKPART</code>. The Windows Disk Manager should also be smart enough to use FAT16 instead of FAT32 if you decide to use it to format the partition.</li>
</ul>
<ul>
<li>Extract the UEFI bootloader support files mentioned above to the newly formatted FAT partition. If you downloaded the Raspberry Pi 3 UEFI firmware binary from the link above, you just have to uncompress the zip file onto the root of your media, and everything will be set as it should be.</li>
</ul>
<ul>
<li>Extract the content of the Debian ISO you downloaded to the root of the FAT partition. On Windows you can use a utility such as 7-zip to do just that (or you can mount the ISO in File Explorer then copy the files).</li>
</ul>
Once you have completed the steps above, eject your SD card, insert it in your Pi 3 and power it up. Make sure no other media is plugged in besides the SD card. Especially, make sure that there aren't any USB Flash Drives or USB HDDs connected.<br />
<br />
<h3>
Initial Boot</h3>
<br />
Unless you did something wrong, you should see the multicoloured boot screen, which indicates that the Raspberry Pi properly detected your SD media and is loading the low level CPU bootloader from it.<br />
<br />
Then you should see the black and white Raspberry logo, which indicates that the Raspberry Pi UEFI firmware is running.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-AlDyRWbJytc/XVRAEum-tbI/AAAAAAAACZ4/oloT--pw2CIYEGCDZVGYiciLK6mDx4b7wCLcBGAs/s1600/logo.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://1.bp.blogspot.com/-AlDyRWbJytc/XVRAEum-tbI/AAAAAAAACZ4/oloT--pw2CIYEGCDZVGYiciLK6mDx4b7wCLcBGAs/s640/logo.png" width="640" /></a></div>
<br />
<br />
Wait for the GNU GRUB menu to appear (which it should do by default after the Pi logo disappears) and choose <code>*Install</code> (which should already be the default) and let the Debian installer process start.<br />
<br />
<h3>
Debian Installer</h3>
<br />
<b>Note</b>: In case anything goes wrong during install, remember that you can use <code>Alt</code>-<code>F4</code> to check the current installation log for details about the error.<br />
<ul>
<li>Select your Language, Country and Keyboard and let the installer proceed until it reports that <code>No Common CD-ROM drive was detected.</code></li>
</ul>
<ul>
<li>At this stage, on <code>Load CD-ROM drivers from removable media</code> select <code>No</code>.</li>
</ul>
<ul>
<li>On <code>Manually select a CD-ROM module and device</code> select <code>Yes</code>.</li>
</ul>
<ul>
<li>On <code>Module needed for accessing the CD-ROM</code> select <code>none</code>.</li>
</ul>
<ul>
<li>On <code>Device file for accessing the CD-ROM</code> type exactly the following:<br /><br />
<pre class="brush: text">-t vfat -o rw /dev/mmcblk0p1</pre>
<br />
For the reasons why you need to type this, see Appendix B below.</li>
</ul>
<ul>
<li>With the "CD-ROM" device set, let the installation process proceed and retrieve the base packages from the media until it asks you for the non-free firmware files on the network hardware detection. If you plan to use the wired connection, you can skip the (Optional) step below.</li>
</ul>
<ul>
<li>(Optional) If you plan to use WLAN for the installation, choose <code>Yes</code> for <code>Load missing firmware from removable media</code>. If you created the media from that Raspberry Pi 3 firmware archive linked above, the relevant firmware files will be detected under the <code>firmware/</code> directory.<br /><br /><u>Note 1:</u> Because there are multiple files to load, <b>you will be prompted multiple times</b> for different firmware files (look closely at their names, you will see that they are actually different). This is normal. Just select <code>Yes</code> for each new file.<br /><br /><u>Note 2:</u> Though they are included in the UEFI firmware zip archive we linked above, it is most likely okay not to provide the <code>.clm_blob</code> if you don't have it (the Wifi drivers should work without that file), so don't be afraid to select <code>No</code> here if needed.</li>
</ul>
<ul>
<li>Set up your network as requested by the installer by (optionally) choosing the network interface you want to use for installation and (also optionally) setting up your access point and credentials if you use Wifi.</li>
</ul>
<ul>
<li>Go through the hostname, user/password set up and customize those as you see fit.</li>
</ul>
<ul>
<li>Let the installer continue until you get to the <code>Partition disks</code> screen. There, for <code>Partitioning method</code> select <code>Manual</code>. You <b>should</b> see something like this:<br /><br />
<pre class="brush: text">MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
#1 primary 314.6 MB B K ESP
pri/log FREE SPACE</pre>
<br />
If, instead, you see something like this:<br /><br />
<pre class="brush: text">MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
#1 primary 314.6 MB B fat16
pri/log FREE SPACE</pre>
<br />In other words, if you don't see <code>B K ESP</code> for the first partition, then it means that you didn't partition or format your drive as explained above and you will need to reference Appendix C (<i>Help, I screwed up my partitioning!</i>) to sort you out.</li>
</ul>
<ul>
<li>From there select the <code>FREE SPACE</code> partition and use the partition manager's menu to create two new primary partitions (one for swap and one for the root file system), until you have something like this:<br /><br />
<pre class="brush: text">MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
#1 primary 314.6 MB B K ESP
#2 primary 1.0 GB f swap swap
#3 primary 14.7 GB f ext4 /
</pre>
</li>
</ul>
<ul>
<li>Select <code>Finish partitioning and write changes to disk</code> and then <code>Yes</code> on <code>Write the changes to disks?</code> and let the installer continue with the base system installation.</li>
</ul>
<ul>
<li>After a while, the installer will produce a big red ominous message that says:<br /><br />
<pre class="brush: text">[!!] Configure the package manager
apt-configuration problem
An attempt to configure apt to install additional packages from the CD failed.</pre>
<br />
This, however, is actually a completely <b>benign</b> message and you can safely ignore it by selecting <code>Continue</code>. That's because, since we are conducting a net install, we couldn't care less about no longer being able to access the "CD-ROM" files after install...</li>
</ul>
<ul>
<li>Once you have dismissed the message above, pick the mirror closest to your geographical location and let the installer proceed with some more software installation (this time, the software will be picked from that network mirror rather than from the media).<br />When prompted for the "package usage survey", pick whichever option you like.</li>
</ul>
<ul>
<li>Finally, at the <code>Software selection</code> screen, select any additional software package you wish to install. Note that the "Debian desktop environment" should work out of the box if you decide to install it (though I have only tested Xfce so far). It's probably a good idea to install at least "SSH server".</li>
</ul>
<ul>
<li>Let the process finalize the software and GRUB bootloader installation and, provided you didn't screw up your partitioning (i.e. you saw <code>B K ESP</code> when you entered the partition manager, otherwise see Appendix C) select <code>Continue</code> to reboot your machine on the <code>Installation complete</code> prompt.</li>
</ul>
<br />
If everything worked properly, your system will now boot into your brand new vanilla Debian ARM64 system. Enjoy!<br />
<br />
<h3>
Post install fixes</h3>
<br />
Here are a few things that you might want to fix post install: <br />
<ol>
<li>You may find a <code>cdrom0</code> drive on your desktop, which doesn't seem to be accessible. This is a leftover from the installer process not knowing how to handle the installation media device. You should edit <code>/etc/fstab</code> to remove it.<br /> </li>
<li>If you installed the <code>cups</code> package, you may get an error while loading modules (<code>systemctl --failed</code> will report that <code>systemd-modules-load.service</code> is in failed state). This is all due to the current <code>cups</code> package trying to load IBM PC kernel modules... on a non PC device. To fix this, simply delete <code>/etc/modules-load.d/cups-filters.conf</code> and reboot.</li>
<li>If using UEFI firmware v1.6 or later, you can enable the serial console by editing <code>/etc/default/grub</code> and changing <code>GRUB_CMDLINE_LINUX=""</code> to <code>GRUB_CMDLINE_LINUX="console=ttyS0,115200"</code>, and then running <code>update-grub</code>.<br />You may also enable serial console access for GRUB by adding the following in the same file:<br /><code>GRUB_TERMINAL=serial</code><br /><code>GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --stop=1"</code> <br />
</li>
</ol>
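For reference, item 3's serial console changes boil down to the following lines in <code>/etc/default/grub</code> (a sketch; only the relevant lines are shown, and they assume UEFI firmware v1.6 or later with the standard Debian GRUB packaging):

```
# /etc/default/grub (relevant lines only)
# Kernel console on the first serial port:
GRUB_CMDLINE_LINUX="console=ttyS0,115200"
# GRUB's own menu on the serial port as well:
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --stop=1"
```

Don't forget to run <code>update-grub</code> after editing the file.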
<h3>
Appendix A: How to create and format the SD partition for installation</h3>
<br />
<b>IMPORTANT NOTE 1:</b> Please make sure to select the disk that matches your SD media before issuing any of these commands. Using the wrong disk will irremediably destroy your data!<br />
<br />
<b>IMPORTANT NOTE 2:</b> Do <u>not</u> be tempted to "force" FAT32 in <code>DISKPART</code> or <code>mkfs</code>, and do not forget to set the bootable/active flag, else you will run afoul of the issue described in Appendix C. <br />
<h4>
Windows </h4>
<pre class="brush: text">C:>diskpart
Microsoft DiskPart version 10.0.18362.1
Copyright (C) Microsoft Corporation.
On computer: ########
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 238 GB 0 B *
Disk 1 Online 465 GB 1024 KB *
Disk 4 Online 4657 GB 1024 KB *
Disk 5 Online 4649 GB 0 B *
Disk 6 Online 14 GB 14 GB
DISKPART> select disk 6
Disk 6 is now the selected disk.
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> convert mbr
DiskPart successfully converted the selected disk to MBR format.
DISKPART> create partition primary size=300
DiskPart succeeded in creating the specified partition.
DISKPART> active
DiskPart marked the current partition as active.
DISKPART> format fs=fat quick
100 percent completed
DiskPart successfully formatted the volume.
DISKPART> exit
Leaving DiskPart...
C:>
</pre>
<br />
Note: if needed, you can also force a specific partition type (e.g. <code>set id=0e</code> to force FAT16 LBA), but that shouldn't be necessary, as <code>DISKPART</code> should set the appropriate type on its own.<br />
<br />
<h4>
Linux</h4>
<br />
The following assumes <code>/dev/sdf</code> is your SD/MMC device. Change it in all the commands below to use your actual device.<br />
<br />
(Optional) If your drive was partitioned as GPT, or if you're not sure, you may want to issue the two following commands first. If it's MBR you can skip this step:<br />
<br />
<pre class="brush: text"># Delete the primary GPT:
dd if=/dev/zero of=/dev/sdf bs=512 count=34
# Delete the backup GPT.:
dd if=/dev/zero of=/dev/sdf bs=512 count=34 seek=$((`blockdev --getsz /dev/sdf` - 34))</pre>
<br />
Now use <code>fdisk</code> and <code>mkfs</code> to partition the drive:<br />
<br />
<pre class="brush: text">root@debian:~# fdisk /dev/sdf
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x7d188929.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-31291391, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-31291391, default 31291391): +300M
Created a new partition 1 of type 'Linux' and of size 300 MiB.
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): e
Changed type of partition 'Linux' to 'W95 FAT16 (LBA)'.
Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
root@debian:~# mkfs.vfat -F 16 /dev/sdf1
mkfs.fat 4.1 (2017-01-24)
root@debian:~#</pre>
<br />
<h3>
Appendix B: Why do we need to use <code>-t vfat -o rw /dev/mmcblk0p1</code> as the CD-ROM device?</h3>
<ul>
<li>Why this weird device name with options? Because these are actually <code>mount</code> command-line parameters: the Debian installer calls <code>mount</code> behind the scenes and feeds it exactly what we write here. This means we can hijack the device name field to invoke the additional <code>mount</code> parameters we need.</li>
</ul>
<ul>
<li>Why <code>/dev/mmcblk0p1</code>? That's simply the name of the device for the first partition (p1) on the SD/MMC media (mmcblk0), as seen by the Linux kernel on a Raspberry Pi.</li>
</ul>
<ul>
<li>Why <code>-t vfat</code>? Because the Debian installer appends <code>fstype=iso9660</code> to the mount options, which prevents automounting and forces us to override the file system type.</li>
</ul>
<ul>
<li>Why <code>-o rw</code>? Because the Debian installer won't otherwise be able to use the first partition for <code>/boot/efi</code>, or to load the WLAN firmware from the media (you get a <code>device or resource busy</code> error when trying to remount the media).</li>
</ul>
<br />
<h3>
Appendix C: Help I screwed up my partitioning!</h3>
<br />
Of course you did. You thought you knew better, and now you are paying the price...<br />
<br />
The problem in a nutshell is that:<br />
<ol>
<li>You can't use a regular ESP on a Raspberry Pi, on account of the fact that neither GPT nor an MBR partition with type <code>0xef</code> is handled by the Broadcom CPU bootloader. And there is nothing you can do about this, because this behaviour is hardcoded in the CPU silicon itself.<br /> </li>
<li>The Debian installer's partition manager is very temperamental about what it will recognize as an ESP. In other words, if you don't use the perfect combination of boot flag, partition type and file system, it will fail to see it as an ESP.</li>
</ol>
Now the good news is that this is recoverable, but you need to know what you're doing.
<br />
<ul>
<li>The first thing you should do in the Debian partition manager is set the first partition to be used as ESP. In other words, you will need to edit the first partition until you get this:<br />
<pre class="brush: text">MMC/SD card #1 (mmcblk0) - 16.0 GB SD 2WCGO
#1 primary 314.6 MB B K ESP
pri/log FREE SPACE</pre>
</li>
</ul>
<ul>
<li>Then you can proceed as the guide describes, but you need to bear in mind that, as soon as you choose to write the partition changes, the partition manager will have changed your first partition type to 0xef, which, as we have seen, is <b>unbootable</b> by the CPU. Therefore, DO NOT PROCEED WITH THE SYSTEM REBOOT AT THE END UNTIL YOU HAVE CHANGED THE PARTITION TYPE BACK.</li>
</ul>
<ul>
<li>To do that, once you get to the <code>Installation complete</code> prompt that asks you to select <code>Continue</code> to reboot, you need to press <code>Alt</code>-<code>F2</code> then Enter to activate a console.</li>
</ul>
<ul>
<li>Then type exactly the following command:<br />
<pre class="brush: text">chroot /target fdisk /dev/mmcblk0</pre>
Then press the keys <code>t</code>, <code>1</code>, <code>e</code>, <code>w</code> (change the type of partition 1 to <code>e</code>, i.e. 0x0e "W95 FAT16 (LBA)", then write the changes).
<br />
</li>
</ul>
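If you want to see what that <code>t</code>, <code>1</code>, <code>e</code>, <code>w</code> sequence does without risking real media, you can replay it against a throwaway disk image (<code>fdisk</code> happily operates on plain files, so no root access or SD card is needed; the image and partition sizes below are arbitrary):

```shell
# Stand-in for /dev/mmcblk0: a 64 MB image with two MBR partitions, the
# first one typed 0xef, which is how the Debian partition manager
# leaves it (and which the Broadcom bootloader cannot boot).
truncate -s 64M disk.img
printf 'o\nn\np\n1\n\n+16M\nn\np\n2\n\n\nt\n1\nef\nw\n' | fdisk disk.img > /dev/null
# The guide's keystrokes, fed non-interactively:
# t = change type, 1 = partition 1, e = 0x0e (W95 FAT16 LBA), w = write
printf 't\n1\ne\nw\n' | fdisk disk.img > /dev/null
fdisk -l disk.img | grep disk.img1
# -> partition 1 now lists as type "e", W95 FAT16 (LBA)
```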
<ul>
<li>Now you can go back to the installer console (<code>Alt</code>-<code>F1</code>) and select <code>Continue</code> to reboot.</li>
</ul>
<br />
<h3>
Appendix D: Installing to an SD from a USB Flash Drive</h3>
<br />
As I explained above, and though it may seem simpler, I would discourage using this method to install Debian on a Raspberry Pi. But I can understand that, if you don't have a card reader, you may be constrained to using it.<br />
<br />
For the most part, this should work fine out of the box. As a matter of fact, if you do it this way, you won't have to fiddle with the "CD-ROM" media detection. However, I will now list some of the caveats you'll face if you proceed like this:<br />
<br />
<b>Caveat 1</b>: If you use guided partitioning, your SD/MMC media will be formatted as GPT (because this is a UEFI system after all), which the Broadcom CPU used in the Raspberry Pi cannot boot. It has to be MBR. How you are supposed to force MBR over GPT in the Debian partition manager, I'll let you figure out.<br />
<br />
<b>Caveat 2</b>: Similarly, you need to go through the <code>0xef</code> to <code>0x0e</code> conversion of your ESP, as the Pi won't boot from that partition otherwise.<br />
<br />
<b>Caveat 3</b>: Of course, you will also need to duplicate all the <code>bootcode.bin</code>, <code>fixup.dat</code> and so on from your USB boot media onto the SD ESP partition if you want it to boot (which is the reason why it is <b>much more convenient</b> to just set the ESP and Debian installer on the SD right off the bat, so you don't risk forgetting to copy a file).<br />
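That copy amounts to something like the following sketch, with placeholder mount points standing in for wherever your USB boot media and the SD card's first partition are actually mounted (the throwaway directories and dummy files only exist to make the sketch runnable; on real hardware you would also carry over <code>config.txt</code>, the <code>start*.elf</code> variants and any <code>.dtb</code> files you need):

```shell
# Placeholder mount points -- substitute your real ones.
usb_esp=$(mktemp -d)   # stands in for the mounted USB boot media
sd_esp=$(mktemp -d)    # stands in for the mounted SD card ESP
touch "$usb_esp/bootcode.bin" "$usb_esp/fixup.dat" "$usb_esp/start.elf" "$usb_esp/config.txt"
# The actual copy of the Broadcom boot files:
cp "$usb_esp"/bootcode.bin "$usb_esp"/fixup*.dat "$usb_esp"/start*.elf "$usb_esp"/config.txt "$sd_esp"/
ls "$sd_esp"
```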
<br />
<b>Caveat 4</b>: When I tried USB to SD install, I found that the GRUB installer somehow didn't seem to create an <code>efi/boot/bootaa64.efi</code>, which, if left uncorrected, will prevent the system from booting automatically.<br />
<br />
<h3>
GitHub verified commits with GPG, TortoiseGit and MSYS/MinGW (Pete Batard, 2018-10-31)</h3>
If you've been browsing git repositories in GitHub, you may have seen that some of them have <i>Verified</i> commits, which is a nice way to indicate that the person who actually committed the code is indeed who they say they are, and not an impersonator who just happened to reuse an e-mail address that is not theirs, for dubious reasons.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-VHJ4_1GlOLQ/W9tY381kxFI/AAAAAAAACD4/AHntmJRn9PMFP0Twi2hirZjMot8-4AEZACLcBGAs/s1600/Image2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="431" data-original-width="1600" height="172" src="https://3.bp.blogspot.com/-VHJ4_1GlOLQ/W9tY381kxFI/AAAAAAAACD4/AHntmJRn9PMFP0Twi2hirZjMot8-4AEZACLcBGAs/s640/Image2.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Typical display of "Verified" GPG commits in GitHub</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
Obviously, if you are the only person who has write access to your GitHub repositories (which is how I tend to operate, for obvious security reasons), verified commits are not that big of a deal. Still, having the badge show in GitHub does help with ensuring that people who are browsing the repo know that you are taking security and trust seriously. So we might as well add commit signing, since it's pretty straightforward to do.<br />
<br />
Now, since these are my main development tools, I will hereafter demonstrate how you can do that using TortoiseGit and MSYS/MinGW GPG on Windows. If you use something else, then you will have to look for posts by other people that match the tools you use. Also, to give credit where credit is due, I will point out that I am mostly copying Julian's dev.to entry titled <a href="https://dev.to/c33s/sign-your-git-commits-with-tortoise-git-on-windows-3mlf">"Sign your git commits with tortoise git on windows"</a>.<br />
<br />
So, without further ado, here's how you should proceed:<br />
<ol>
<li>Create a new GPG key by firing up a MinGW prompt and issuing the following:<br />
<br />
<pre class="brush: bash">$ gpg --full-generate-key --allow-freeform-uid
gpg (GnuPG) 2.2.10-unknown; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
gpg: keybox '/home/nil/.gnupg/pubring.kbx' created
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y
GnuPG needs to construct a user ID to identify your key.
Real name: Pete Batard
Email address: pete@akeo.ie
Comment:
You selected this USER-ID:
"Pete Batard <pete@akeo.ie>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /home/nil/.gnupg/trustdb.gpg: trustdb created
gpg: key F3E83EBB603AF846 marked as ultimately trusted
gpg: directory '/home/nil/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/nil/.gnupg/openpgp-revocs.d/236D8595DE48618C26293122F3E83EBB603AF846.rev'
public and secret key created and signed.
pub rsa4096 2018-10-31 [SC]
236D8595DE48618C26293122F3E83EBB603AF846
uid Pete Batard <pete@akeo.ie>
sub rsa4096 2018-10-31 [E]
</pre>
<br />
You'll notice that, when prompted, we chose to create a 4096-bit RSA and RSA key that never expires.<br /><br />During that process, you will also be prompted to enter the password that safeguards your key. This is the password you will have to enter each time you sign a new commit, so choose it wisely.<br /><br />Note that, when using MSYS2 + MinGW, your GPG keys will be stored under <code>C:\msys2\home\<your_user_name>\.gnupg\</code>.<br /> </li>
<li> Generate the public key in a format that GitHub can accept:<br />
<br />
<pre class="brush: bash">$ gpg --armor --export pete@akeo.ie
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFvZ0+gBEAC7Jkdt3aW5iURti+36suQN9dmhGfVJMEV/Y9giby78wYcq51rj
IvJ2AuYEhVgiFwT2hrlKuems0Jsln6wGUULAQXpLMU4XxlyKHwBE3ETXCXWQbzxH
rNqerDKNu54M/r3XNCW7r38vwNdYrh656eLccZ/jOH8aSSZ9KkBjJ1wa78tx7YZy
+FXXjDbamP3Pu3CPp7Nx3y69FCFm2uYrDkLWqcOvweME9imIqdsLfd5bM+wYclbN
QQuZArV7uoQ2xYFlVweaob5U3iUsGUQYuY7x3Mlbz/73wYxuOGUt5n6de3tdefrN
V5csD3aJVQKjFWOW2oNzI8Qik9pDie+3XQEfbIVHhgCx9kLVe2MzBaWrnPgk2Epj
bIhRheqzvV15iC70QchMrtDzXOcbNhaytggYWPRx1YtEN3G4pPnsVfq0oSdNhwlw
VLYm6eK+kjr0PykIANiiDDe/4WiFTIS1mobp++QCFXm41jtfXP6PM3NJdf1Hx5VX
CcRQKXmukeyW4DfYtr9GoKeu9G1vGQev1U+qjtOk+9SRrofsqfCqzJP4drjbSyk9
43q9HBYSBjnslisQnrhhcl5/5Yb99+sS2EnpW7am/sarCHGiPkLi6eHfYpbxX7Lg
nLXjmXYlpyCkJnkgwzsTUs3+7w2KHaBZ7yme70x2edBD9f1Ar3zm+ryW5QARAQAB
tBpQZXRlIEJhdGFyZCA8cGV0ZUBha2VvLmllPokCTgQTAQgAOBYhBCNthZXeSGGM
JikxIvPoPrtgOvhGBQJb2dPoAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJ
EPPoPrtgOvhGUpkQAIHSu7BNo4/jUhtHjBMiiYVE6eJh1J8+lWkXuATCxo3BXrMb
AAAdNsrPca09NVdSli3xameKSnWt3hXRpkNM2cAC/Sus8UjYGDaCP1pWNyfmd70y
/uAZGf1FIeWL4yIiFcDROobLqlCE+qViWu8sG2Ris8hGA8sjR0cn5891Q/ncHFtE
YYHzh0mn+A9I/gGSvArqYJdNNBptGplo2fnQQIODwHNYSPMCzBawFoll6jocjg6q
FqlawC5f9zPs5HP9k0k0pp37f8i+ANftCfdwOEWurfBDGqrxKiJIyIaS9kLzwCQX
poJGZO/rVbCDGvexfVkqoKMJRK0jO2Rh3p0vifZ2cwKPSFjWfSjUiPAUpcz0nuV5
BSkrMNc1VHgP1FM4v2Vpi7lnaoWMLpVz3VJ8yRyRD/7c7oVEl0NL8lHMZaHiPprf
LmeLIgM5ndh9wkvD9j2EH5JR72lACQtg5n9qmbDro2uJbtGqrhqrVQdPrPtv1XoM
0JAIL+1RvdTuPPBclmTLwdXaztlnEjJOA9loWpkyMIlZVcb/6TWamGAzxu4wMv8o
aQpaVqNIO9kq79lZMHFGDE4VRHAjrJh3nXKpi+/JOIf7xKAnwrZAquAC+bfqYYUm
W9jg35aB+jASlI7+TvQHgal2dFSYebCeWpwPlJr7XeXWJab+UNajeKxRQ2wMuQIN
BFvZ0+gBEAC6nJAWbF6YAnPDaHTTBAAYEHlbiPTt8gYUgoxkUJxV2fcj0g2ye0+x
gFh7Z3eTw5zq3iojah8EWBj5WOHeI1R1q244qaje467onbgowcxsFOH/TgBs1aew
DWNDIMJl/vkSEY5xdmtJIGIUJ/+BH9U7kSX3lB5IFz37WH6hcgQZUjD0fx+Hv5ZX
7Fz8YGXnBnJRwblCJbvkq2BD/1fSI5REddILkQAKd9mzRoXFvKRYwV5Oq78NU4cd
5e20+ALHCPC7fQQ3jFzUo2WMLywWDAi42DOn7E6/tIZT7BwKF08ozNDPpWTj5OOO
OAqjesgsXI410kdayv25LopHnnPCcIcjm35AtA8TDSEfPFlbm59tBo7q5VWi15yb
X1+vkSZfcUoe9lXIr/Ea+RYgayI8xFkBiOlWn8NaWjWrZEr6OG4EOk97bAgey4M4
KEJJkQsQYsVSQ8yVkt1wETkH6GHQFoyoFJUJkxeWDXoG9LyBYr7n+NSbjOAujy/c
XyemCFkJXSeTcn4KAIboBvEV0nQOMjfaEr+hkfXbESfm92MSlL54arrgyY7vcOSI
iztc4ZiTmkQPeeG4PsqUaHYB1lj+qapVQlZ9O+OFH280YWylLBZJMWOKM1lMqgz3
Z2avF2FVax+xBeE8pMnWAUbKTHB7BQAhATjxGGlWy6QtJRxpOrTcGwARAQABiQI2
BBgBCAAgFiEEI22Fld5IYYwmKTEi8+g+u2A6+EYFAlvZ0+gCGwwACgkQ8+g+u2A6
+EbNAQ//WL261oYfKskEmBzz88M7Tt6aj8NyQmXyrIY6RoEYK4+rnS2zFwQfIF6p
3e4avUZYF5xTOSuuiJv4IImnjlilHjA+r6LcmqIGKilIeFQwyNLVr+H/FvZSzKYY
Psr6v0CCBn/6UICmrLoDgr1IiWmlwVDKVNXDZLGHprB00WBrso0pBVWEmbkKzlP9
lYlC11yXo/wsLLnQNbz3DzcUgtyFExyL37EGr1zw2xfmwmRZRQmpILpuiBE/VGI0
pH4JReeGjcqh0TkK+70whQnM9VX6eZbV4cwtBXg1CixY+cwyQcCreRTneGPQT9jj
5dmD9duQOiDw5QGAoQ4tc6AxQcf62KsZmXQ715IMVrbn3leeoVR5PaFQ/PR3MQn+
eS0f+wIDLBgD1tjUeOvjWs79sB7LAvinndZUA/6+nfxR29753gpssFW5tFEK5Kit
OwCnNG4P3SjqfYAN+IIBTUUUPjGPHTKEd85XUBUlCJg7i1iLaeZqamp9oga4gv6d
lLQ50J84i4yk02Afhlic5CNw1l9TfCgdFWF/9+WO7qzHmdJsZl/9Gs05J3hbPzqh
uji6ujyI7v9vDTDC2tR1l3zHTomFJ6Vs42MdpaBWtnePAIohnhtLKCjG3/Z04idj
jjGTV+5EASM2h3WV7vfmxem2HyxEM0lwa5zj8AtaWugqmiO6Rik=
=aMFF
-----END PGP PUBLIC KEY BLOCK-----
</pre>
</li>
<li>Go to <a href="https://github.com/settings/gpg/new">https://github.com/settings/gpg/new</a> and copy/paste the public key data from above.<br /> </li>
<li>Because we are going to call GPG independently of MinGW, you must now copy your <code>C:\msys2\home\<your_user_name>\.gnupg</code> directory to <code>C:\Users\<your_user_name>\</code>.<br /><br /> This is needed because this is the default location where <code>gpg.exe</code> looks for keys when not invoked from MSYS/MinGW, and it doesn't seem possible to alter it without modifying the registry or creating environment variables, which is cumbersome. Besides, this is important data, and you are a lot more likely to back up the content of <code>C:\Users\<your_user_name>\</code> than <code>C:\msys2\home\</code>, so it's probably not a bad idea to duplicate this valuable content there.<br /> </li>
<li>Get the key id that you'll need to use in your config file with:<br />
<pre class="brush: bash">$ gpg --list-keys --keyid-format LONG pete@akeo.ie
gpg: checking the trustdb
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub rsa4096/F3E83EBB603AF846 2018-10-31 [SC]
236D8595DE48618C26293122F3E83EBB603AF846
uid [ultimate] Pete Batard <pete@akeo.ie>
sub rsa4096/308A9C6106D2FCE4 2018-10-31 [E]</pre>
<br />The 40-character hex string under <code>pub</code> is the value you are after.<br /> </li>
<li>In each project where you want to have signed commits, edit your <code>.git/config</code> so that it contains the following options:<br />
<pre class="brush: text">[user]
signingkey = 236D8595DE48618C26293122F3E83EBB603AF846
[commit]
gpgsign = true
[gpg]
program = "C:/msys2/usr/bin/gpg.exe"</pre>
</li>
</ol>
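Alternatively, rather than editing <code>.git/config</code> by hand, the same three settings can be applied with the <code>git config</code> command line. The sketch below runs against a throwaway repository created just for the demonstration; in practice you would run the three <code>git config</code> lines inside your own project (or add <code>--global</code> to enable signing for every repository on the machine):

```shell
# Throwaway repository for demonstration purposes only:
demo=$(mktemp -d)
git init -q "$demo"
# Same settings as the .git/config snippet above:
git -C "$demo" config user.signingkey 236D8595DE48618C26293122F3E83EBB603AF846
git -C "$demo" config commit.gpgsign true
git -C "$demo" config gpg.program "C:/msys2/usr/bin/gpg.exe"
git -C "$demo" config commit.gpgsign
# -> true
```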
If you do the above correctly, then next time you commit into the git repo you modified, you should be prompted for your GPG key password, and, after you push to GitHub, you should find that the commit has the <i>Verified</i> badge.<br />
<br />
Note that you can also validate whether your commit was properly signed, before pushing, by issuing:<br />
<pre class="brush: bash">$ git log --show-signature</pre>
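For the scripting-inclined, the interactive steps above (key generation, armored export, fingerprint retrieval) can also be done non-interactively. The sketch below requires GnuPG 2.1 or later, runs against a throwaway <code>GNUPGHOME</code>, and uses an empty passphrase purely for demonstration, which you should obviously not do for a real signing key:

```shell
# Throwaway keyring so the demo doesn't touch your real ~/.gnupg:
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
# Batch equivalent of `gpg --full-generate-key` (empty passphrase is
# for demo only -- pick a real one for an actual signing key):
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo User <demo@example.com>" rsa2048 sign never
# Armored public key, in the format GitHub expects:
gpg --armor --export demo@example.com > pubkey.asc
head -1 pubkey.asc   # -> -----BEGIN PGP PUBLIC KEY BLOCK-----
# Machine-readable fingerprint (the 40-hex-char signingkey value):
gpg --list-keys --with-colons demo@example.com | awk -F: '/^fpr:/ { print $10; exit }'
```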
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-3017373910762569742017-05-17T00:12:00.000+01:002018-07-26T21:43:36.320+01:00Using a YubiKey to store code signing certificates<h3>
Preamble (skip this if you only want the How To)</h3>
If you are a Windows software developer and/or distributor, then, by all means, you are well aware that you should always digitally sign your software, so that a minimum level of accountability and trust can be established between yourself and your users.<br />
<br />
As you should also know, this process is usually accomplished by acquiring a Windows Authenticode credential (a credential is a certificate + its associated private key) which can then be used to digitally sign binary executables.<br />
<br />
However, one must also consider the security aspect of the signing process and realize that, given the faintest opportunity, ill-intentioned people will try to grab your code signing credentials if they can. For instance, perhaps you are already aware that the NSA's Stuxnet virus was signed using credentials that were <b>stealthily duplicated</b> from JMicron and Realtek, and that, outside of state-sponsored endeavours, malware authors are also exceedingly interested in acquiring data they could use to steal the identity of a trustworthy person, even more so if that person or entity is producing popular software.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://www.yubico.com/wp-content/uploads/2015/11/YubiKey-4-laptop-angle-finger-01-720x720.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://www.yubico.com/wp-content/uploads/2015/11/YubiKey-4-laptop-angle-finger-01-720x720.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">(Image credits: Yubico.com)</td></tr>
</tbody></table>
This means that, should malware find its way onto your development machine (as part of an infected development tool, for instance, which malware authors are likely to target if they can, as it can mean a huge payoff), it'll most likely be able to steal BOTH your credential and its private key password, since one can only expect semi-competent malware to implement both a disk scanning and a keylogging facility.<br />
<br />
Therefore, as a code signing developer, you're only ever one dodgy software installation away from finding that your credential(s) and protective password(s) have been exfiltrated into very wrong hands indeed...<br />
<br />
As a result, it doesn't take a paranoid person to realize that storing credentials on disk, even if it's a removable USB flash drive, that you only plug when signing binaries, is a very very bad idea. Similarly, if your alternative is to store your signing credentials into the Windows certificate store, and expect that it'll be enough, you should probably realize that the level of a <b>software-only </b>security solution on Windows goes about as far as the distance you can throw a chair... with Steve Ballmer sitting on it.<br />
<br />
Thus, what you really want, is have your credentials stored on a <b>dedicated</b> removable device, <b>that's designed precisely to protect that kind of stuff</b>. And this is where <a href="https://en.wikipedia.org/wiki/FIPS_201" target="_blank">FIPS 201</a> Personal Identity Verification (PIV) devices, and especially the convenient and relatively affordable <a href="https://www.yubico.com/products/yubikey-hardware/" target="_blank"><b>YubiKeys</b></a> from <a href="https://www.yubico.com/" target="_blank">Yubico</a> come into play.<br />
<br />
<br />
For the record, most of the process described below can likely be applied to any FIPS 201 PIV device, but since YubiKeys are what I use, I'll focus only on YubiKey usage.<br />
<br />
<h3>
Prerequisites</h3>
<br />
First of all, it is important to note that not all YubiKeys are created equal. Especially, <a href="https://www.yubico.com/products/yubikey-hardware/" target="_blank">the cheapest YubiKey model does <b>NOT</b> have PIV support</a>. Thus, if you plan to use a YubiKey for the purpose of signing code, you should steer away from the FIDO U2F Security Key model, as it is incompatible with this procedure. With this being said, the prerequisites are as follow:<br />
<ul>
<li>Any YubiKey model <b>EXCEPT the FIDO U2F Security Key.</b> My preference goes for the <a href="https://www.yubico.com/product/yubikey-4-series/#yubikey-4" target="_blank">YubiKey 4</a>, but anything that has the PIV feature will do.</li>
<li>Your code signing credentials, which you have obtained from your Certification Authority, <b>temporarily</b> saved as a <code>.p12</code> file (Note: You may have to use the Windows certificate store export feature to get to that file, and <a href="https://www.digicert.com/code-signing/exporting-code-signing-certificate.htm">follow the procedure highlighted here</a>, if your CA only delivers signing credentials into the certificate store)</li>
<li>The latest version of YubiKey PIV Manager, which you should download and install <a href="https://www.yubico.com/products/services-software/download/smart-card-drivers-tools/" target="_blank"><b>from here</b></a>.</li>
</ul>
<h3>
Storing your code signing credential into a YubiKey</h3>
<ol>
<li>Open PIV Manager (<code>pivman.exe</code>). You may have to go fetch it from its installation directory if it did not create a Start menu entry, as was the case on my machine:<br /><br /><a href="https://1.bp.blogspot.com/-8NioMsZgUe0/WRtvn25eMmI/AAAAAAAAAWg/ag3Odhzr9lA6Qhh2zE4b_J96jdT05EyRACEw/s1600/Yubi_Image0.png"><img border="0" height="224" src="https://1.bp.blogspot.com/-8NioMsZgUe0/WRtvn25eMmI/AAAAAAAAAWg/ag3Odhzr9lA6Qhh2zE4b_J96jdT05EyRACEw/s400/Yubi_Image0.png" width="400" /></a></li>
<br />
<li>Plug your YubiKey. If this is the first time you use it, you will be greeted by the following screen asking you to set a PIN:<br /><a href="https://4.bp.blogspot.com/-RzQei15sSqA/WRtvngi_eNI/AAAAAAAAAWc/cY7OHaMn1CA52w1MVlIYADPVIniIc0LyACEw/s1600/Yubi_Image1.png" imageanchor="1"><img border="0" height="360" src="https://4.bp.blogspot.com/-RzQei15sSqA/WRtvngi_eNI/AAAAAAAAAWc/cY7OHaMn1CA52w1MVlIYADPVIniIc0LyACEw/s400/Yubi_Image1.png" width="400" /></a><br />Under "Management Key" you should keep the "Use PIN as key" option checked.<br /><br />On the other hand, since you're going to use that key for code signing on Windows, you can disregard the cross-platform compatibility recommendation, as I haven't seen any issues with using a PIN with extended alphanumeric characters on Windows, and, with a length of 8 characters, the PIN is already short enough as it is.<br /><br /> One thing I should point out is that, just like with a credit card, the device only gives you 3 attempts at entering the right PIN before locking itself (which is <b>exactly what you want from a device that stores valuable data</b>) so keep that in mind when you use it. Of course, a YubiKey can always be reset if locked, but you will lose access to the credentials stored on it.</li>
<br />
<li>Once you have set the PIN, you should see the following screen, where you need to click the "Certificates" button:<br /><a href="https://3.bp.blogspot.com/-CSFi0rgaGjA/WRtvoE2-clI/AAAAAAAAAWo/cW-xhFFELA8njn0ziR4Fev05wfrFVID2ACEw/s1600/Yubi_Image3.png" imageanchor="1"><img border="0" height="224" src="https://3.bp.blogspot.com/-CSFi0rgaGjA/WRtvoE2-clI/AAAAAAAAAWo/cW-xhFFELA8njn0ziR4Fev05wfrFVID2ACEw/s400/Yubi_Image3.png" width="400" /></a></li>
<br />
<li>On the Certificates screen, select the "Digital Signature" tab:<br /><a href="https://4.bp.blogspot.com/-IFBeHOK2MZE/WRtvodhiOzI/AAAAAAAAAWs/ttpPGVoUwKEhDSLOTArNATASIs4v0FrtwCEw/s1600/Yubi_Image4.png" imageanchor="1"><img border="0" height="167" src="https://4.bp.blogspot.com/-IFBeHOK2MZE/WRtvodhiOzI/AAAAAAAAAWs/ttpPGVoUwKEhDSLOTArNATASIs4v0FrtwCEw/s400/Yubi_Image4.png" width="400" /></a></li>
<br />
<li> Click "Import from file" and select your <code>.p12</code> code signing credential. You will be prompted by a password, which of course is the password for the private key of your <code>.p12</code> (and not the key's PIN).</li>
<br />
<li>If everything goes well, you will see the following notice, which you should follow by unplugging your YubiKey:<br /><a href="https://2.bp.blogspot.com/-5jNaVT_DKcU/WRtvoW_a63I/AAAAAAAAAWw/1IDfKdAGTe8TytwDOSWvM6k9JfIUbejcQCEw/s1600/Yubi_Image6.png" imageanchor="1"><img border="0" height="238" src="https://2.bp.blogspot.com/-5jNaVT_DKcU/WRtvoW_a63I/AAAAAAAAAWw/1IDfKdAGTe8TytwDOSWvM6k9JfIUbejcQCEw/s400/Yubi_Image6.png" width="400" /></a></li>
<br />
<li>After re-plugging your YubiKey, and going back to the "Digital Signature" certificate, you should see details about the installed credential, which is ready to be used for code signing:<br /><a href="https://4.bp.blogspot.com/-0jE_Ctx8gwE/WRtvolhXAvI/AAAAAAAAAW0/WCUqgeEX6X41OfNI9rY_5Q2dL6Qyoo_ugCEw/s1600/Yubi_Image7.png" imageanchor="1"><img border="0" height="155" src="https://4.bp.blogspot.com/-0jE_Ctx8gwE/WRtvolhXAvI/AAAAAAAAAW0/WCUqgeEX6X41OfNI9rY_5Q2dL6Qyoo_ugCEw/s400/Yubi_Image7.png" width="400" /></a></li>
</ol>
<h3>
Bonus: Storing more than one code signing credential onto your YubiKey</h3>
If you are producing Windows software that still needs to target platforms like Vista or XP, you might be saying: <i>"That's all very well, but what if I need to sign my software with both an SHA-1 and SHA-256 Authenticode credential? There's only one Digital Signature slot on the YubiKey after all..."</i><br />
<br />
Well, the thing is, this is one of the exact issues I have been faced with for Rufus, and I can tell you that, as far as code signing is concerned, the labels assigned to the certificate/credential storage slots are pretty much irrelevant. You can use any of these 4 slots to store any code signing credential you want (since they are referenced by their fingerprint), and we only used the "Digital Signature" PIV slot because that's the one that makes most sense for storing a code signing signature. However, if you also want to store an SHA-1 credential, you can use any of the remaining slots to do that.<br />
<br />
My preference is to use the optional "Card Authentication" slot to store your extra SHA-1 credential (so that you can use the "Authentication" and "Key Management" slots for actual authentication or key management if you ever need to). At least, this is what I have been doing for double-signing my Rufus application, and neither <code>SignTool</code> nor the YubiKey seems to have any trouble with that.<br />
<br />
<h3>
Using the stored credentials with SignTool</h3>
<br />
Okay, so you have your code signing credential(s) safely stored on a secure YubiKey. Now what?<br />
Clearly you can't use <code>SignTool</code> in the usual fashion, where you reference a local <code>.p12</code> or <code>.pfx</code> file.<br />
<br />
Instead, and especially if you have multiple code signing credentials residing on it, because the YubiKey is automatically detected as a credentials storage device by Windows, what you want to do is reference your credentials by their unique SHA-1 fingerprint in SignTool, and let Windows/YubiKey handle the rest. This is exactly what the <code>/sha1</code> flag of SignTool is for.<br />
<br />
However, before we can do that, we need to figure out the SHA-1 fingerprint of your certificate.<br />
The simplest way to do that, while ensuring that you are really going to be accessing the credentials that you want to access, is:<br />
<br />
<ol>
<li>Go back to PIV Manager, and open the slot where the credential you are after resides:<br />
<a href="https://4.bp.blogspot.com/-0jE_Ctx8gwE/WRtvolhXAvI/AAAAAAAAAW0/WCUqgeEX6X41OfNI9rY_5Q2dL6Qyoo_ugCEw/s1600/Yubi_Image7.png" imageanchor="1"><img border="0" height="155" src="https://4.bp.blogspot.com/-0jE_Ctx8gwE/WRtvolhXAvI/AAAAAAAAAW0/WCUqgeEX6X41OfNI9rY_5Q2dL6Qyoo_ugCEw/s400/Yubi_Image7.png" width="400" /></a></li>
<br />
<li>Click "Export Certificate" and save the file as a <b>.crt</b> (you will need to type the extension as part of the file name)</li>
<br />
<li>Double click on the .crt you just saved and go to the "Details" tab</li>
<br />
<li>Scroll down to the "Thumbprint" field (should be the very last) and copy its content. This is the SHA-1 fingerprint you are after:<br /><a href="https://1.bp.blogspot.com/-J8UOtwK0D3o/WRuBcdrKXJI/AAAAAAAAAXE/Cass5EQKnCgrtIlg_s7Gl5gspwdFmEAbwCLcB/s1600/Yubi_Image8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://1.bp.blogspot.com/-J8UOtwK0D3o/WRuBcdrKXJI/AAAAAAAAAXE/Cass5EQKnCgrtIlg_s7Gl5gspwdFmEAbwCLcB/s400/Yubi_Image8.png" width="312" /></a></li>
</ol>
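If you'd rather stay on the command line, OpenSSL (which your MSYS2/MinGW environment most likely ships) can print the same thumbprint directly from the exported <code>.crt</code>. In the sketch below, a throwaway self-signed certificate merely stands in for your real export:

```shell
# Throwaway self-signed certificate, standing in for the .crt you
# exported from PIV Manager:
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
    -days 1 -nodes -subj "/CN=Demo" 2>/dev/null
# SHA-1 thumbprint, flattened to the bare hex string that SignTool's
# /sha1 flag expects:
openssl x509 -in demo.crt -noout -fingerprint -sha1 \
    | sed -e 's/^.*=//' -e 's/://g'
# -> the certificate's 40-digit SHA-1 thumbprint
```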
Now you can use <code>SignTool</code> with <code>/sha1</code> instead of <code>/f</code> and when you do so, you will be prompted to plug your YubiKey (if it isn't plugged in) as well as your PIN, which, if you enter successfully, will enable the signature operation.<br />
<br />
I'll conclude with a real life example, using a YubiKey 4 where I store both an SHA-256 code signing credential (fingerprint <code>5759b23dc8f45e9120a7317f306e5b6890b612f0</code>) and an SHA-1 credential (fingerprint <code>655f6413a8f721e3286ace95025c9e0ea132a984</code>), that I use to sign and timestamp the dual SHA-1+SHA-256 Rufus binary:<br />
<br />
<pre class="brush: text">SignTool sign /v /sha1 655f6413a8f721e3286ace95025c9e0ea132a984 /fd SHA1 /tr http://sha256timestamp.ws.symantec.com/sha256/timestamp rufus.exe
SignTool sign /as /v /sha1 5759b23dc8f45e9120a7317f306e5b6890b612f0 /fd SHA256 /tr http://sha256timestamp.ws.symantec.com/sha256/timestamp rufus.exe</pre>
<br />
<h3>
IMPORTANT NOTE: Do *NOT* let Windows install the Yubikey Minidriver as part of Windows Update! </h3>
<br />
It looks like the latest versions of Windows insist on installing a YubiKey Minidriver, which ends up wreaking havoc on your ability to actually use a YubiKey as a signing device. If you let Windows have its way, you may end up getting a message stating <span style="font-family: "courier new" , "courier" , monospace;">The smart card cannot perform the requested operation or the operation requires a different smart card</span> when attempting to sign your binary:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-WCFrtijcc_M/Wth-0fZoJZI/AAAAAAAAA2g/kBKCRAXGbJgISTAj5g-_i-eGufkSdNODACLcBGAs/s1600/Image1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="330" data-original-width="456" height="288" src="https://1.bp.blogspot.com/-WCFrtijcc_M/Wth-0fZoJZI/AAAAAAAAA2g/kBKCRAXGbJgISTAj5g-_i-eGufkSdNODACLcBGAs/s400/Image1.png" width="400" /></a></div>
<br />
<br />
If you get this issue, just go to your installed software and <b>delete</b> the Yubikey Smart Card Minidriver:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-KxvpbAN7WpY/Wth_mQT5BRI/AAAAAAAAA2o/h3q-KcuBitYexrZH3FcXDCECvAVdMRiDQCLcBGAs/s1600/Image2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="337" data-original-width="555" height="242" src="https://2.bp.blogspot.com/-KxvpbAN7WpY/Wth_mQT5BRI/AAAAAAAAA2o/h3q-KcuBitYexrZH3FcXDCECvAVdMRiDQCLcBGAs/s400/Image2.png" width="400" /></a></div>
<br />
Once you have done that, you should find that you can use your Yubikey for signing applications again.<br />
<br />
Or, if you want to use the MiniDriver, you can follow the steps highlighted <a href="https://github.com/Yubico/yubikey-piv-manager/issues/24#issuecomment-405423404">here</a>.<br />
<br />
<h3>
Final words</h3>
Now that you've seen how to do it, I would strongly urge you to go and purchase a YubiKey (or any other FIPS 201 PIV device) and NEVER, EVER again store code signing credentials on anything else than a secure password protected device that was designed precisely for this.<br />
<br />
This means that, once you have done all of the above and validated that it works, <b>you should DELETE your <code>.p12</code>/<code>.pfx</code> and remove any trace of your credential(s) from your computer</b>.<br />
<br />
Of course, if you are really worried, you may still choose to store a copy of said credential(s) on a backup CD-ROM (preferably in a password-protected archive) that you'll only keep in a locked place. But by all means, if you have a working YubiKey, you should not let your code signing credential(s) anywhere near any of the computers that you own!<br />
<br />
<h3>
Compiling desktop ARM or ARM64 applications with Visual Studio 2017 (Pete Batard, 2017-05-15)</h3>
Unlike what I was led to think, and despite the relative failure of Windows RT (which was kind of a given, considering Microsoft's utterly idiotic choice for its branding), it doesn't look like Microsoft has abandoned the idea of Windows on ARM/ARM64, including allowing developers to produce native desktop Windows applications for that platform.<br />
<br />
However, there are a couple caveats to work through, before you can get Visual Studio 2017 to churn out Windows native ARM/ARM64 applications.<br />
<br />
<h3>
Caveat #1 - Getting the ARM/ARM64 compiler and libraries installed</h3>
<br />
In Visual Studio 2017 setup, Microsoft seem to have done their darnedest to prevent people from installing the MSVC ARM compilers: the ARM development components are not listed in the default "Workloads" view, and if you want them, you will need to go fetch them in the "Individual components" view, as per the screenshot below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-ywUtwWRyoSo/Wf0BabB2SLI/AAAAAAAAAZs/W3OcfqGP9L05QNjnTJ9mAwrSqhpP9PGWwCLcBGAs/s1600/VS2017_Individual_Components.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1020" data-original-width="1600" height="404" src="https://4.bp.blogspot.com/-ywUtwWRyoSo/Wf0BabB2SLI/AAAAAAAAAZs/W3OcfqGP9L05QNjnTJ9mAwrSqhpP9PGWwCLcBGAs/s640/VS2017_Individual_Components.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
It is important to note that you will need Visual Studio 2017 Update 4 (version 15.4) or later for the ARM64 compiler to be available for installation. ARM64 was <b>silently</b> added by Microsoft late in the Visual Studio 2017 update cycle, so if it's ARM64 you are after, you will need an updated version of the Visual Studio installer.<br />
<br />
<h3>
Caveat #2 - Error MSB8022: Compiling Desktop applications for the ARM/ARM64 platform is not supported. </h3>
<br />
Okay, so now that you have the ARM/ARM64 compilers installed, you have created your nice project, set the target to ARM/ARM64 (while cursing Microsoft for even making that step painful), hit "Build Solution", and <i><b>BAM!</b></i>, all you get is a string of:<br />
<br />
<pre class="brush: text">Toolset.targets(53,5): error MSB8022: Compiling Desktop applications for the ARM platform is not supported.</pre>
<br />
<br />
What gives?<br />
<br />
Short answer is, Microsoft doesn't really want <b>YOU</b> to develop native desktop ARM/ARM64 applications. Instead, they have this grand vision where you should only develop boring, limited UWP interpreted crap (yeah, I know the intermediary CIL/bytecode gets compiled to a reusable binary on first run, but it is <b>still</b> interpreted crap - I mean, if it is so great, then why isn't the Visual Studio dev env using it?), which they'll then be able to distribute through the App Store. God forbid, in this day and age, you would still want to produce a native win32 executable!<br />
<br />
So, they added an extra hurdle to produce native ARM/ARM64 windows binaries, which you need to overcome by doing the following:<br />
<br />
<ol>
<li>Open every single <code>.vcxproj</code> project file that is part of your solution</li>
<li>Locate all the <code>PropertyGroup</code> parts that are relevant for ARM/ARM64. There should usually be two for each, one for <code>Release|ARM[64]</code> and one for <code>Debug|ARM[64]</code> and they should look something like this:<br /><br /><pre class="brush: xml"> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|ARM[64]'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<CharacterSet>Unicode</CharacterSet>
<WholeProgramOptimization>true</WholeProgramOptimization>
<PlatformToolset>v141</PlatformToolset>
</PropertyGroup> </pre>
<br />
</li>
<li>Insert a new <code><WindowsSDKDesktopARMSupport>true</WindowsSDKDesktopARMSupport></code> or <code><WindowsSDKDesktopARM64Support>true</WindowsSDKDesktopARM64Support></code> property, so that you have (ARM64):<br /><br /><pre class="brush: xml"> <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|ARM64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<CharacterSet>Unicode</CharacterSet>
<WholeProgramOptimization>true</WholeProgramOptimization>
<PlatformToolset>v141</PlatformToolset>
<WindowsSDKDesktopARM64Support>true</WindowsSDKDesktopARM64Support>
</PropertyGroup> </pre>
<br />
</li>
</ol>
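If your solution contains a lot of project files, the edit above can also be scripted rather than applied by hand. Below is a hypothetical sketch (the helper name and the indentation are my own, and it only handles the ARM64 variant; adapt the pattern and property name for ARM) that appends the property right after each matching <code>PropertyGroup</code> opening tag:

```shell
# Hypothetical helper: append the ARM64 desktop-support property right after
# every ARM64 "Configuration" PropertyGroup opening tag of a .vcxproj file.
# This is a sketch, not something Visual Studio provides: review the result
# (e.g. with a diff) before committing it.
add_arm64_support() {
  awk '{
    print
    # The apostrophe before the closing quote is matched with "." to keep
    # the shell/awk quoting simple
    if ($0 ~ /ARM64." Label="Configuration"/)
      print "    <WindowsSDKDesktopARM64Support>true</WindowsSDKDesktopARM64Support>"
  }' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Example (file name is made up): add_arm64_support MyProject.vcxproj
```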
If you do just that, then Visual Studio should now allow you to compile your ARM application without complaining. However, this may not be the end of it because...<br />
<br />
<h3>
Caveat #3 - You may still be missing some libraries unless you use a recent SDK</h3>
<br />
Depending on the type of desktop application you are creating, you may find that the linker complains about missing libraries. Some of these are fairly easy to sort out, as they are due to the Win32 and x64 default configs being more forgiving about not explicitly specifying libraries such as <code>gdi32</code>, <code>advapi32</code>, <code>comdlg32</code> or <code>shell32</code> for linking. However, if you are using the default 8.1 SDK in your project, you may also find that some very important libraries, such as <code>setupapi.lib</code>, are simply missing for ARM. These libraries only seem to have been added by Microsoft recently, so it might be a good idea to switch your project to a recent Windows SDK, such as 10.0.15063.0:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-MB7A_d8C7FU/WRnK9hFbyRI/AAAAAAAAAWA/1GJ5wd6TOjY9SyzWdJfl4tdHuRibMK3hwCLcB/s1600/VS2017_SDK.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="436" src="https://4.bp.blogspot.com/-MB7A_d8C7FU/WRnK9hFbyRI/AAAAAAAAAWA/1GJ5wd6TOjY9SyzWdJfl4tdHuRibMK3hwCLcB/s640/VS2017_SDK.png" width="640" /></a></div>
<br />
Be mindful however that the default SDK that Visual Studio automatically installs if you select the C/C++ components is not full-featured, and will be missing important libraries for ARM/ARM64 compilation, so there again, you <b>must</b> go to the individual components and explicitly select the SDK you want, so that the full version gets installed.<br />
<br />
With all the above complete, you should now be able to join the select club of developers who are able to produce actual native Windows applications for ARM/ARM64, and this just in time for the release of Windows 10 on ARM64.<br />
<br />
Of course, and as usual, if you want a real-life example of how it's done, you can look at <a href="https://github.com/pbatard/rufus" target="_blank">how Rufus does it</a>...Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com3tag:blogger.com,1999:blog-8361942945221983453.post-73335835427569833202016-05-18T13:06:00.001+01:002016-05-22T01:33:25.390+01:00Adding a new driver to an existing UEFI firmwareFollowing up on <a href="http://pete.akeo.ie/2014/06/so-i-built-ntfs-efi-driver.html" target="_blank">efifs</a> and <a href="http://pete.akeo.ie/2011/06/extracting-and-using-modified-vmware.html" target="_blank">VMWare firmware extraction</a>, you might be interested to find out how, for instance, to add an NTFS EFI driver to an existing UEFI firmware, so that you can access/boot NTFS volumes from UEFI.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-uZQhksT9UEQ/VzxZQnPkH2I/AAAAAAAAAR0/8V_QttrQUfYbboBX9ybCLAgka_kxJJmpQCK4B/s1600/Image8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-uZQhksT9UEQ/VzxZQnPkH2I/AAAAAAAAAR0/8V_QttrQUfYbboBX9ybCLAgka_kxJJmpQCK4B/s1600/Image8.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Reading NTFS volumes natively from UEFI</td></tr>
</tbody></table>
<br />
If that's the case, then look no further than <b><a href="https://github.com/pbatard/efifs/wiki/Adding-a-driver-to-a-UEFI-firmware" target="_blank">this guide</a></b>.<br />
<br />
It provides a step by step breakdown, using VMWare, of how you can generate an UEFI firmware module from an EFI driver executable (through <a href="https://github.com/pbatard/ffs" target="_blank">FFS</a>, which is a convenient repackaging of the EDK2's <code>GenSec</code> and <code>GenFfs</code>), and insert it into an existing UEFI firmware to make the driver natively available:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-cW3V8povj7I/VzxaQnC6ZvI/AAAAAAAAASA/O3IANsOxyp0iz_gQ6hbZu3cGopABNNqmwCK4B/s1600/Image6.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://3.bp.blogspot.com/-cW3V8povj7I/VzxaQnC6ZvI/AAAAAAAAASA/O3IANsOxyp0iz_gQ6hbZu3cGopABNNqmwCK4B/s1600/Image6.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A VMWare UEFI firmware with a native NTFS driver</td></tr>
</tbody></table>
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com6tag:blogger.com,1999:blog-8361942945221983453.post-38358578174981032172016-05-16T22:41:00.001+01:002021-01-22T16:04:33.163+00:00Help, I lost all networking on my Raspberry Pi!This happened to me the other day, as I was upgrading a Pi system from Debian Jessie to Sid.<br />
<br />
After reboot, I suddenly got the following warning in the boot log:<br />
<pre class="brush: text">[FAILED] Failed to start Raise network interfaces.
See 'systemctl status networking.service' for details.</pre>
And of course, issuing <code>ifconfig</code> returned the dreaded output with only loopback:<br />
<pre class="brush: bash">root@pi ~ # ifconfig
lo: flags=73<up> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 4 bytes 240 (240.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4 bytes 240 (240.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</host></up></pre>
A look at the suggested <code>systemctl status networking.service</code> yielded no better results:<br />
<pre class="brush: bash">root@pi ~ # systemctl status networking.service
• networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/generator/networking.service.d
└─50-insserv.conf-$network.conf
Active: failed (Result: exit-code) since Mon 2016-05-16 22:05:36 IST; 1min 2s ago
Docs: man:interfaces(5)
Process: 296 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 288 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 296 (code=exited, status=1/FAILURE)
May 16 22:05:36 pi systemd[1]: Starting Raise network interfaces...
May 16 22:05:36 pi ifup[296]: Cannot find device "eth0"
May 16 22:05:36 pi ifup[296]: Failed to bring up eth0.
May 16 22:05:36 pi systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
May 16 22:05:36 pi systemd[1]: Failed to start Raise network interfaces.
May 16 22:05:36 pi systemd[1]: networking.service: Unit entered failed state.
May 16 22:05:36 pi systemd[1]: networking.service: Failed with result 'exit-code'.</pre>
Drats! What on earth am I gonna do if I no longer have networking?!?<br />
<br />
Well, below is what you can do to get out of this precarious situation:<br />
<br />
<ol>
<li>Issue a <code>networkctl</code> to confirm that your Ethernet interface is still present. At this stage, it will probably only be listed as <code>enxa1b2c3...</code>, where <code>A1B2C3...</code> is your Pi's MAC address:<br /><pre class="brush: bash">root@pi ~ # networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
2 enxa1b2c3d4e5f6 ether off unmanaged
2 links listed.
</pre>
This is actually where the problem lies: The network interface isn't mapped to its usual <code>eth0</code>, which in turn makes the networking boot scripts go <i>"Huh?"</i>...</li>
<li>Check that you can bring the interface up and down, to confirm that it isn't a hardware or kernel issue with the following:<br />
<pre class="brush: bash">root@pi ~ # ifconfig enxa1b2c3d4e5f6 up
[ 190.272495] smsc95xx 1-1.1:1.0 enxa1b2c3d4e5f6: hardware isn't capable of remote wakeup
[ 190.285729] IPv6: ADDRCONF(NETDEV_UP): enxa1b2c3d4e5f6: link is not ready
[ 191.851700] IPv6: ADDRCONF(NETDEV_CHANGE): enxa1b2c3d4e5f6: link becomes ready
[ 191.864838] smsc95xx 1-1.1:1.0 enxa1b2c3d4e5f6: link up, 100Mbps, full-duplex, lpa 0xCDE1
root@pi ~ # networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
2 enxa1b2c3d4e5f6 ether routable unmanaged
2 links listed.
root@pi ~ # ifconfig enxa1b2c3d4e5f6 down
[ 199.3354400] smsc95xx 1-1.1:1.0 enxa1b2c3d4e5f6: hardware isn't capable of remote wakeup</pre>
NB: Make sure you leave the interface <b>down</b> for the next steps.</li>
<li>Now, we should be able to get <code>eth0</code> going again by issuing this:<br /><pre class="brush: bash">root@pi ~ # ip link set enxa1b2c3d4e5f6 name eth0
[ 277.211063] smsc95xx 1-1.1:1.0 eth0: renamed from enxa1b2c3d4e5f6
root@pi ~ # systemctl restart networking
[ 300.952068] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
[ 300.959844] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 302.475405] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 302.484821] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1</pre>
</li>
<li>A quick check with <code>ifconfig</code> should confirm that we're rolling again. <b>However</b>, this is just a temporary solution, which won't persist after a reboot. So we need something a bit more permanent, which is to create a <code>/etc/udev/rules.d/70-persistent-net.rules</code> (which is probably the one file that got screwed up when you lost your network), containing something like:<br /><pre class="brush: text">SUBSYSTEM=="net", ACTION=="add", DRIVERS=="smsc95xx", ATTR{address}=="*", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"</pre>
<br />If you add this file and reboot, you should find that everything's back in order again. Phew, another crisis averted!
</li>
</ol>
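For reference, here is the whole fix condensed into a single sketch. The <code>enx...</code> interface name below is an example (substitute the name that <code>networkctl</code> reports on your system), and the renaming steps are guarded so they only run as root:

```shell
#!/bin/sh
# Condensed sketch of the recovery steps above. The interface name is an
# example: substitute the enx... name that networkctl reports for you.
IFACE="${IFACE:-enxa1b2c3d4e5f6}"
RULE='SUBSYSTEM=="net", ACTION=="add", DRIVERS=="smsc95xx", ATTR{address}=="*", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"'

# Temporary fix: rename the interface back to eth0 (the interface must be
# down before it can be renamed). No-op when not running as root.
if [ "$(id -u)" -eq 0 ] && [ -e "/sys/class/net/$IFACE" ]; then
  ip link set "$IFACE" down
  ip link set "$IFACE" name eth0
  systemctl restart networking
fi

# Permanent fix: recreate the udev rule so the rename survives reboots
if [ "$(id -u)" -eq 0 ]; then
  printf '%s\n' "$RULE" > /etc/udev/rules.d/70-persistent-net.rules
fi
```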
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com7tag:blogger.com,1999:blog-8361942945221983453.post-16363824603044483082016-01-07T12:21:00.001+00:002018-05-03T14:21:23.303+01:00Windows 10 N edition, MTP and EVRIf you have Windows 10 installed, you may have run into a stream of unexpected annoyances, such as being unable to access your Android device as an MTP device to copy files, or the Enhanced Video Renderer (EVR) not being offered as an option in K-Lite Codec Pack's awesome Media Player.<br />
<br />
<a href="http://4.bp.blogspot.com/-B7IHfZQ2xNo/Vo5VhQ4-zAI/AAAAAAAAAOM/7womCLLNgcY/s1600/Image2.png" imageanchor="1"><img border="0" height="295" src="https://4.bp.blogspot.com/-B7IHfZQ2xNo/Vo5VhQ4-zAI/AAAAAAAAAOM/7womCLLNgcY/s320/Image2.png" width="320" /></a><br />
<br />
What gives? Wasn't Windows 10 supposed to make things <b>easier</b>?!?<br />
<br />
Well, as it turns out, if you happen to have the N version of Windows installed (which you can find out by going to <i>Settings</i> → <i>System</i> → <i>About</i>), you are effectively using a version of Windows that is <b>crippled</b>, and that has quite a lot more functionality removed than simply the front facing Windows Video Player.<br />
<br />
So off you head to the internet, where they tell you to install <span class="comment-copy">KB3010081 (the Media Feature Pack for Windows 10 N and Windows 10 KN editions)... except this doesn't work if you have 1511 (the Nov. 2015 update)!! </span><span class="comment-copy">Oh, and you may also find out that, if you had the feature working before, the Nov. update broke it altogether.</span><br />
<span class="comment-copy"><br /></span>
<span class="comment-copy">That's because each update of Windows 10 requires its own specific Media Feature pack, which means that if you're using 1508, 1511, 1607, 1703, 1709 or 1803 (or upgraded to one of these versions) then you must install the corresponding pack <a href="https://support.microsoft.com/en-us/help/3145500/media-feature-pack-list-for-windows-n-editions" target="_blank">from this list</a>!</span><br />
<br />
<span class="comment-copy"><b>IMPORTANT NOTE</b>: It looks like Microsoft is taking its sweet time to update the list when they release a new version of Windows. However, the direct <a href="https://www.microsoft.com/en-us/software-download/mediafeaturepack" target="_blank">download page</a> may have the latest Media Feature Pack available before the list is updated.</span><br />
<br />
Sure, there are small notices on some of these to indicate that they might have been superseded, but one really has to wonder why Microsoft can't provide a proper update for the Media Feature Pack...Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com6tag:blogger.com,1999:blog-8361942945221983453.post-7041368349528818282015-01-08T01:41:00.001+00:002016-04-20T23:19:07.661+01:00Easily create UEFI applications using Visual Studio 2015As <a href="http://pete.akeo.ie/2014/11/visual-studio-2013-has-now-become.html" target="_blank">pointed out before</a>, Visual Studio is now essentially free for all development, and its solid IDE of course makes it very desirable as the environment to use to develop UEFI applications on Windows.<br />
<br />
Now, you might have read that, short of using the oh-so-daunting <a href="http://tianocore.sourceforge.net/wiki/EDK2" target="_blank">EDK2</a>, and spending <b>days</b> on the intricate voodoo magic required to make it play nice with the Visual Studio IDE, there is no salvation in the UEFI world. However, this couldn't be further from the truth.<br />
<br />
<b>Enter <a href="https://github.com/pbatard/uefi-simple" target="_blank">UEFI:SIMPLE</a>.</b><br />
<br />
The thing is, Visual Studio can already compile EFI applications <b>without having to rely on any external tools</b>, and even if you want an EDK2-like environment, with the common EFI API calls that it provides, you can totally do away with the super heavy installation and setup of the EDK, and instead use the lightweight and straightforward <a href="https://sourceforge.net/projects/gnu-efi/" target="_blank">GNU-EFI</a> library, which provides about the same level of functionality (as far as building standalone EFI applications or drivers is concerned, which is what we are interested in).<br />
<br />
So really, if you want to craft an EFI application in no time at all, all you need to do is:<br />
<br />
<ol>
<li><a href="http://www.visualstudio.com/products/visual-studio-community-vs" target="_blank">Install Visual Studio 2015</a>, which is totally free and which, no matter who you work for or what restrictions your corporate IT department tries to impose, you are 100% legally entitled to when it comes to trying to compile and test UEFI:SIMPLE.</li>
<li><i>As suggested by the Visual Studio installer</i>, install a git client such as <a href="http://msysgit.github.io/" target="_blank">msys-git</a> (or <a href="https://code.google.com/p/tortoisegit/" target="_blank">TortoiseGit</a> + msys-git). Now, you're going to wonder why, with git support being an integral part of Visual Studio 2015, we actually need an external client, but one problem is that Microsoft decided to strip their embedded git client of critical functionality, such as git submodule support, which we'll need.</li>
<li>Because you'd be a fool not to want to test your EFI application or driver in a virtual environment (and, thanks to QEMU, this is so exceedingly simple to achieve that UEFI:SIMPLE will do it for you), you should download and install <a href="http://www.qemu.org/" target="_blank">QEMU</a>, preferably the 64 bit version (you can find a 64 bit qemu installer <a href="https://qemu.weilnetz.de/w64/">here</a>), and preferably to its default location of <code>C:\Program Files\qemu</code>.</li>
<li>Clone the UEFI:SIMPLE git project, using the URI <code>https://github.com/pbatard/uefi-simple.git</code>. For this part, you can either use the embedded git client from Visual Studio or your external client.</li>
<li>Now, <b>using your external git client</b>, navigate to your uefi-simple directory and issue the following commands:<br /><pre><code>git submodule init
git submodule update</code></pre>
This will fetch the gnu-efi library source, which we rely on to build our application.</li>
<li>Open the solution file in Visual Studio and just click the "Local Windows Debugger" button to both compile <b>and run</b> our "Hello, World"-type application in QEMU.<br /> Through its <code>debug.vbs</code> script, which can be found under the "Resource File" category, the UEFI:SIMPLE solution will take care of setting everything up for you, including downloading the <a href="http://tianocore.sourceforge.net/wiki/OVMF" target="_blank">OVMF UEFI firmware</a> for QEMU.<br /> Note that if you didn't install QEMU into <code>C:\Program Files\qemu\</code>, you will need to edit <code>debug.vbs</code> to modify the path.</li>
<li>Finally, because the UEFI:SIMPLE source is public domain, you can now use it as a starting point to build your own UEFI application, whilst relying on the <a href="http://wiki.phoenix.com/wiki/index.php/EFI_BOOT_SERVICES" target="_blank">standard EFI API calls that one expects</a>, and, more importantly, with an easy way to test your module at your fingertips.</li>
</ol>
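Incidentally, if you want to see for yourself why step 5's two submodule commands matter (and why a git client without submodule support won't cut it), the behaviour is easy to reproduce locally. The following self-contained sketch (repository names are made up; it stays entirely on your disk) creates a repo with a submodule, clones it, and shows that the submodule stays empty until <code>init</code>/<code>update</code> are run:

```shell
# Self-contained demonstration of why 'git submodule init/update' is needed:
# after cloning a repo that uses a submodule, the submodule directory stays
# empty until those two commands are run. Repo names here are made up.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A 'lib' repo standing in for gnu-efi, and an 'app' repo that embeds it
git init -q lib
(cd lib && git config user.email t@example.com && git config user.name t &&
 echo 'lib source' > lib.c && git add . && git commit -qm init)
git init -q app
(cd app && git config user.email t@example.com && git config user.name t &&
 git -c protocol.file.allow=always submodule add -q "$tmp/lib" lib &&
 git commit -qm 'add submodule')

# A fresh clone only records the submodule; it does not fetch its content
git clone -q app app-clone
cd app-clone
test ! -e lib/lib.c

# ...until the submodule is initialized and updated
git submodule init
git -c protocol.file.allow=always submodule update
test -e lib/lib.c
```

(The <code>protocol.file.allow</code> overrides are only needed because recent git versions refuse file-protocol submodules by default; they play no role in a normal https clone like UEFI:SIMPLE's.)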
Oh, and I should point out that UEFI:SIMPLE also has ARM support, and can also be compiled on Linux, or using MinGW if you don't want to use Visual Studio on Windows. Also, if you want real-life examples of fully fledged UEFI applications that were built using UEFI:SIMPLE as their starting point, you should look no further than <a href="https://github.com/pbatard/efifs" target="_blank">efifs</a>, a project that builds a whole slew of EFI file system drivers, or <a href="https://github.com/pbatard/uefi-ntfs" target="_blank">UEFI:NTFS</a>, which allows seamless EFI boot of NTFS partitions.Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com23tag:blogger.com,1999:blog-8361942945221983453.post-19140215263869671272014-11-18T00:36:00.000+00:002014-11-20T01:22:56.801+00:00Applying a series of Debian patches to an original sourceSay you have a nice original source package, as well as a bunch of extra Debian patches, which you want to apply to that source (for instance, you may want to compile Debian's grub 2.00-22 using the tarballs you picked <a href="https://launchpad.net/ubuntu/+source/grub2/2.00-22">here</a>).<br />
<br />
However, since Debian uses quilt, or whatever it's called, to automate the application of a series of patches, and you either don't have it on your system or don't want to bother with it (since you're only interested in the patches), you end up wanting to apply all the files from the <code>patches</code> directory of the .debian addon, and there's of course no way you'll want to do that manually.<br />
<br />
The solution: Copy the <code>patches/</code> directory from the Debian addon to the root of your orig source, and run the following shell script.<br />
<br />
<pre class="brush:bash">#!/bin/bash
# Apply every patch listed in the Debian 'series' file, in order
while read -r p; do
    # 'series' files may contain blank lines or comments; skip those
    case "$p" in ''|\#*) continue ;; esac
    patch -p1 < "./patches/$p"
done < ./patches/series</pre>
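If you want to convince yourself that this kind of loop does the right thing, here is a self-contained dry run (file names and patch content are made up for the demonstration) that builds a tiny source tree plus a <code>patches/series</code> pair, then applies it:

```shell
# Self-contained dry run of the patch-applying loop: build a one-file
# source tree with a patches/ directory and a series file, then apply.
# All file names and contents here are made up for the demonstration.
set -e
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p patches
printf 'hello\n' > file.txt
cat > patches/fix-greeting.patch <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
echo fix-greeting.patch > patches/series

# The loop from the post, applied to the tree we just built
while read -r p; do
  patch -p1 < "./patches/$p"
done < ./patches/series

cat file.txt    # file.txt now reads 'goodbye'
```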
<br />
<i>By Grabthar's hammer, what a timesaver!</i>
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-23672857547075956152014-11-16T20:20:00.002+00:002014-11-16T20:35:40.943+00:00Compiling Grub4DOS on MinGWSince <a href="https://github.com/chenall" target="_blank">chenall</a> committed a handful of patches that I submitted to make the compilation of <a href="https://github.com/chenall/grub4dos" target="_blank">Grub4DOS</a> on MinGW easier, I'm just going to jot down some quick notes on how you can produce a working Grub4DOS on Windows.<br />
Note that part of this guide is shamelessly copied from the <a href="http://www.rmprepusb.com/tutorials/compile-grub4dos" target="_blank">RMPrepUSB Grub4DOS compilation guide</a>.<br />
<ul>
<li>If you don't already have a git client, download and install <b>msys-git</b> (a.k.a. "<i>Git for Windows</i>") from <a href="http://msysgit.github.io/" target="_blank">here</a>.</li>
<li>Download the latest <b>MinGW32 installer</b> (<code>mingw-get-setup.exe</code>) by clicking the "<i>Download Installer</i>" button on the top right corner of the <a href="http://www.mingw.org/" target="_blank">main MinGW site</a>.</li>
<li>Keep the default options on the first screen (but you can change the destination directory if you want)</li>
<li>On the package selection screen, select</li>
<ul>
<li><b>mingw-developer-toolkit</b></li>
<li><b>mingw-base</b></li>
<li><b>msys-base</b></li>
</ul>
<li>Select menu <i>Installation</i> → <i>Apply Changes</i> and click <i>Apply</i></li>
<li>Now navigate to your msys directory, e.g. <code>C:\MinGW\msys\1.0\</code>, and open the file <code>etc\profile</code> in a text editor.</li>
<li>Assuming that you installed msys-git in <code>C:\Program Files (x86)\Git</code>, change the following:<pre class="brush: text">if [ $MSYSTEM == MINGW32 ]; then
export PATH=".:/usr/local/bin:/mingw/bin:/bin:$PATH"
else
export PATH=".:/usr/local/bin:/bin:/mingw/bin:$PATH"
fi</pre>
to<pre class="brush: text">if [ $MSYSTEM == MINGW32 ]; then
export PATH=".:/usr/local/bin:/mingw/bin:/bin:/c/Program Files (x86)/Git/bin:$PATH"
else
export PATH=".:/usr/local/bin:/bin:/mingw/bin:/c/Program Files (x86)/Git/bin:$PATH"
fi</pre>
This is to ensure that your system will be able to invoke git. Of course, if you use a different git client, you can ignore this step.</li>
<li>Download <b>nasm </b>(current build is: <a href="http://www.nasm.us/pub/nasm/releasebuilds/2.11.06/win32/nasm-2.11.06-win32.zip">http://www.nasm.us/pub/nasm/releasebuilds/2.11.06/win32/nasm-2.11.06-win32.zip</a>) extract and copy <code>nasm.exe</code> to <code>C:\MinGW\msys\1.0\bin</code> (the other files in the zip archive can be discarded).</li>
<li>Download <b>upx </b>(current build is: <a href="ftp://ftp.heanet.ie/mirrors/sourceforge/u/up/upx/upx/3.91/upx391w.zip">ftp://ftp.heanet.ie/mirrors/sourceforge/u/up/upx/upx/3.91/upx391w.zip</a>) extract and copy <code>upx.exe</code> to <code>C:\MinGW\msys\1.0\bin</code> (the other files in the zip archive can be discarded). </li>
<li>In <code>C:\MinGW\msys\1.0\</code> launch <code>msys.bat</code></li>
<li>In the shell that appears, issue the following command (this may be necessary to locate <code>mingw-get</code>):<pre class="brush: text">/postinstall/pi.sh</pre>
You should accept all the default options.</li>
<li>Now issue the following commands:<pre class="brush: text">mingw-get upgrade gcc=4.6.2-1
mingw-get install mpc=0.8.1-1</pre>
This will effectively downgrade your compiler to gcc 4.6.2, which is necessary as gcc 4.7 or later doesn't seem to produce a working <code>grldr</code> for the time being.</li>
<li>Download the latest Grub4DOS source from github by issuing the following command<pre class="brush: text">git clone https://github.com/chenall/grub4dos.git</pre>
Note: By default this will download the source into <code>C:\MinGW\msys\1.0\home\<your_user_name>\grub4dos\</code>, but you can of course navigate to a different directory before issuing the <code>git clone</code> command if you want it elsewhere.</li>
<li>Run the following commands:<pre class="brush: text">cd grub4dos
./autogen.sh
make</pre>
</li>
</ul>
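Before kicking off <code>make</code>, it can be worth checking that everything the steps above installed is actually reachable from the MSYS shell. A hypothetical little helper (not part of MinGW/MSYS; the name is my own):

```shell
# Hypothetical helper (not part of MinGW/MSYS): report any of the tools
# this guide relies on that cannot be found on the PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# The toolchain used by the Grub4DOS build:
check_tools gcc make nasm upx git
```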
At the end of all this, you should end up with a <code>grldr</code> and <code>grldr.mbr</code> in the <code>C:\MinGW\msys\1.0\home\<your_user_name>\grub4dos\stage2\</code> directory, which is what you want.<br />
<br />
<u>IMPORTANT</u>: Do not try to invoke <code>./configure</code> directly on MinGW, as compilation will fail. Instead, you should ensure that you call autotools to re-generate a configure script and Makefiles that MinGW will be happy with. Note that you can run <code>./bootstrap.sh</code> instead of <code>./autogen.sh</code>, if you don't want <code>configure</code> to be invoked with the default options.<br />
<h4>
What's the deal with gcc 4.7 or later on MinGW?</h4>
I haven't really investigated the issue, but the end result is that <code>grldr</code> is 303 KB, vs 307 KB for gcc 4.6.2, and freezes at boot after displaying:<br />
<pre class="brush: text">A20 Debug: C806 Done! ...</pre>
<h4>
I'm getting an error about objcopy during the configure test... </h4>
That's because you're not listening to what I say, and are trying to compile a version of Grub4DOS that doesn't contain the necessary updates for MinGW. You must use a version of the source that's more recent than 2014.11.14, and right now, that source is only available if you clone from git.<br />
<h4>
Dude, could you, like, also provide the steps to compile from Linux?</h4>
Sigh... Alright, since I'm a nice guy, and it's a lot simpler, I'll give you the steps for a bare Debian 7.7.0 x64 Linux setup:<br />
<pre class="brush: text">aptitude install gcc gcc-multilib libc6-dev-i386 make automake autoconf git nasm upx-ucl
git clone https://github.com/chenall/grub4dos.git
cd grub4dos
./autogen.sh
make</pre>
Happy now? Note that the Linux-compiled version is usually a lot smaller than the MinGW32-compiled one.Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com2tag:blogger.com,1999:blog-8361942945221983453.post-26026990507021480222014-11-13T22:34:00.001+00:002017-02-06T14:20:39.683+00:00Visual Studio 2013 has now become essentially free...See <b><a href="http://www.visualstudio.com/products/visual-studio-community-vs">http://www.visualstudio.com/products/visual-studio-community-vs</a></b>.<br />
<br />
I'm just going to point to the first two paragraphs of the license terms:<br />
<blockquote class="tr_bq">
<i>
</i>
<div class="MsoNormal" style="line-height: normal; margin-bottom: 6.0pt; margin-left: .25in; margin-right: 0in; margin-top: 6.0pt; text-autospace: none; text-indent: -.25in;">
<i><b><span style="font-family: "tahoma" , "sans-serif"; font-size: 10.0pt;">1. </span></b> <b><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">INSTALLATION AND USE RIGHTS.</span></b></i></div>
<i>
<div class="MsoNormal" style="line-height: normal; margin-bottom: 6.0pt; margin-left: .5in; margin-right: 0in; margin-top: 6.0pt; text-autospace: none; text-indent: -.25in;">
<b><span style="font-family: "tahoma" , "sans-serif"; font-size: 10.0pt;">a. </span></b> <b><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">Individual license.</span></b> <span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">If you are an individual working on your own applications to sell or for any other purpose, you may use the software to develop and test those
applications.</span></div>
<div class="MsoNormal" style="line-height: normal; margin-bottom: 6.0pt; margin-left: .5in; margin-right: 0in; margin-top: 6.0pt; text-autospace: none; text-indent: -.25in;">
<b><span style="font-family: "tahoma" , "sans-serif"; font-size: 10.0pt;">b. </span></b> <b><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">Organization licenses.</span></b> <span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">If you are an organization, your users may use the software as follows:</span></div>
<ul>
<li><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">Any number of your users may use the software to develop and test your
applications released under Open Source Institute (OSI)-approved open source software licenses.</span></li>
<li><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">Any number of your users may use the software to develop and test your
applications as part of online or in person classroom training and education, or for performing academic research.</span></li>
<li><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">If none of the above apply, and you are also not an enterprise (defined below), then up to 5 of your individual users can use the software concurrently to develop and test your applications.</span></li>
<li><span style="font-family: "tahoma" , "sans-serif"; font-size: 9.5pt;">If you are an enterprise, your employees and contractors may not use the software to develop or test your applications, except for open source and education purposes as permitted above. An “enterprise” is any
organization and its affiliates who collectively have either (a) more than 250 PCs or users <u>or</u> (b) more than one million US dollars (or the equivalent in other currencies) in annual revenues, and “affiliates” means those entities that control (via majority ownership), are controlled by, or are under common control with an organization.</span></li>
</ul>
</i>
</blockquote>
Basically, this means that even if you're a corporate user, you can legally install and use Visual Studio Community Edition, on any computer you want, to compile and/or contribute to Open Source projects, regardless of your company's internal policies on software installation (otherwise any company could enact an internal policy such as "Microsoft software licenses don't apply here" to be <i>entitled</i> to install as many unlicensed copies of Windows as they like).<br />
So I have to stress this very vehemently: if a company or IT department tries to take away your right to download and install Visual Studio 2013 Community Edition to compile or test Open Source projects, <u>THEY ARE IN BREACH OF THE LAW</u>!<br />
The only case where you are not entitled to use Visual Studio Community Edition is if you're developing a <b>closed source</b> application for a company. But who in their right mind would ever want to do something like that anyway?... ;)<br />
<br />
So all of a sudden, you no longer have to jump through hoops if you want to recompile, debug and contribute to <a href="https://rufus.akeo.ie/" target="_blank">rufus</a>, <a href="http://libusb.info/" target="_blank">libusb</a> or <a href="http://libwdi.akeo.ie/" target="_blank">libwdi</a>/<a href="http://zadig.akeo.ie/">Zadig</a> - simply install Visual Studio 2013, <b>as you are fully entitled to </b>(because all these projects use an OSI approved Open Source license), and get going!<br />
<br />
Oh, and for the record, if you want to keep a copy of Visual Studio 2013 Community Edition, for offline installation, you should run the installer as:<br />
<pre class="brush: text">vs_community.exe /layout</pre>
Note however that this will set you back 8 GB in terms of download size and disk space.Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com20tag:blogger.com,1999:blog-8361942945221983453.post-55869173640076603552014-10-08T19:14:00.001+01:002015-02-16T12:21:24.045+00:00Free SSL certificate for Open Source projectsJust going to point out that <a href="https://www.globalsign.com/" target="_blank">GlobalSign</a> are currently offering a 1-year SSL certificate for Open Source projects <b><a href="https://www.globalsign.com/ssl/ssl-open-source/" target="_blank">for free</a></b>.<br />
<br />
Alas, this is only for a specific domain name, such as <code>app.project.org</code>, rather than for a wildcard domain, such as <code>*.project.org</code>, and at this stage, I'm not entirely sure if the certificate is also renewable for free after one year. But at least, this now allows me to offer access to Rufus from <a href="https://rufus.akeo.ie/" target="_blank">https://rufus.akeo.ie</a>.<br />
<br />
Oh, and once your site is set for SSL, you probably want to ensure that it is properly configured by running it through Qualys SSL Labs' excellent <a href="https://www.ssllabs.com/ssltest/" target="_blank">SSL analysis tool</a>.<br />
<br />
And I'm just going to jot down that, to get a <a href="https://www.ssllabs.com/ssltest/analyze.html?d=rufus.akeo.ie" target="_blank">proper grade</a> with Apache, you may have to edit your <code>/etc/apache2/mods-enabled/ssl.conf</code> and set the following:<br />
<pre class="brush: text">SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4</pre>
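If you want to check what that <code>SSLCipherSuite</code> string actually expands to on your own server, you can feed it to <code>openssl ciphers</code> (a quick sketch; the exact list you get back depends on your OpenSSL version):

```shell
# Expand the cipher string from ssl.conf above into the individual
# ciphers it enables, one per line (list varies with OpenSSL version).
openssl ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4' \
  | tr ':' '\n'
```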
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-58227462182089580762014-09-29T23:28:00.000+01:002014-09-29T23:28:34.844+01:00Getting proper coloured directory listing with Debian and PuttySince I keep having to do that:<br />
<br />
<ol>
<li>In putty, in the Colours setting tab for your connection, make sure that "Indicate bolded text by changing" is set to "The colour" and <b>not</b> "The font"</li>
<li>In Debian's bashrc, uncomment the line:<br /><code>force_color_prompt=yes</code></li>
</ol>
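For step 2, the edit boils down to a one-line <code>sed</code>. Here is a sketch that operates on a local sample file rather than your real <code>~/.bashrc</code>, just to illustrate:

```shell
# Uncomment force_color_prompt in a bashrc; done on a local sample
# copy here instead of ~/.bashrc itself.
printf '#force_color_prompt=yes\n' > bashrc.sample
sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' bashrc.sample
grep '^force_color_prompt' bashrc.sample
```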
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com0tag:blogger.com,1999:blog-8361942945221983453.post-75406545667404466092014-06-30T01:27:00.002+01:002018-11-01T19:58:17.553+00:00So I built an NTFS EFI driver...It's <a href="http://en.wikipedia.org/wiki/Free_software">Free Software</a> of course, and it only took me about two weeks to do so.<br />
<br />
Since I've been doing it in my <b>limited</b> spare time, I might as well brag about it and say that, had I been able to work on this full time (which I sure wouldn't mind), it probably wouldn't have taken more than 7 days... I can't help but wonder how much a proprietary/non-free software shop would have had to budget, or outsource, to achieve the same thing within the same amount of time.<br />
<br />
At the very least, this demonstrates that, if you start with the right resource set and, more importantly, if you stop being irrational about how <i>"using the GPLv3 is the death knell of your software revenue stream"</i>, a project such as this one can easily and cheaply be completed in a matter of <b>days</b>.<br />
<br />
<br />
Anyway, the driver itself is read-only (which is all I need for <a href="http://rufus.akeo.ie/" target="_blank">Rufus</a>, as my intent is to use it <a href="https://github.com/pbatard/uefi-ntfs">there</a>) and it could probably use some more polishing/cleanup, but it is stable enough to be used right now.<br />
<br />
So, if you are interested in a redistributable and 100% Free Software <b>read-only NTFS EFI driver</b>, you should visit:<br />
<b><a href="https://efi.akeo.ie/">https://efi.akeo.ie</a></b> (the link includes pre-built binaries).<br />
<br />
Alternatively, you can also visit the github project page at:<br />
<a href="https://github.com/pbatard/efifs">https://github.com/pbatard/efifs</a><br />
<br />
Now, I'd be ungrateful if I didn't mention that the main reason I was able to get something off the ground this quickly is thanks to the awesome developers behind the <a href="http://www.gnu.org/software/grub/" target="_blank">GRUB 2.0 project</a>, who abstracted their file system framework enough, to make reusing their code in an EFI implementation fairly straightforward.<br />
And I also have to thank the <a href="http://ipxe.org/" target="_blank">iPXE</a> developers, who did most of the back-breaking work in figuring out a GPL friendly version of an EFI FS driver, that I could build on.<br />
Finally, I was also able to reuse some of the good work from the <a href="http://www.rodsbooks.com/refind/" target="_blank">rEFInd</a> people (the GPLv3 compatible parts), which was a big help!<br />
<br />
But the lesson is: Don't waste your time with proprietary/non-free software. If you are both interested in being productive and budget-conscious, <a href="https://www.fsf.org/about/what-is-free-software">Free Software</a> is where it's at!Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com11tag:blogger.com,1999:blog-8361942945221983453.post-45375154200717817072014-05-31T00:29:00.001+01:002014-05-31T00:29:20.488+01:00Restoring EFI NVRAM boot entries with rEFInd and RufusSo, you reinstalled Windows, and it somehow screwed the nice EFI entry you had that booted your meticulously crafted EFI system partition? You know, the one you use with rEFInd or ELILO or whatever, to multiboot Linux, Windows, etc., and that has other goodies such as the EFI shell...<br />
<br />
Well, here's how you can sort yourself out (shamelessly adapted from the <a href="https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface#bcfg" target="_blank">always awesome and extremely comprehensive Arch Linux documentation</a>):<br />
<ul>
<li>Download the latest rEFInd CD-R image from <a href="http://www.rodsbooks.com/refind/getting.html" target="_blank">here</a>.</li>
<li>Extract the ISO and use <a href="http://rufus.akeo.ie/" target="_blank">Rufus</a> to create a bootable USB drive. Make sure that, when you create the USB, you have "<i>GPT partition scheme for UEFI computer</i>" selected, under "Partition scheme and target system type".</li>
<li>Boot your computer in UEFI mode, and enter the EFI BIOS to select the USB as your boot device</li>
<li>On the rEFInd screen select "Start EFI shell".</li>
<li>At the <code>2.0 Shell ></code> prompt type:<br /><pre class="brush: text">bcfg boot dump</pre><br />
This should confirm that some of your old entries have been unceremoniously wiped out by Windows.</li>
<li>Find the disk on which your old EFI partition resides by issuing something like:<br /><pre class="brush: text">dir fs0:\efi</pre><br />
NB: you can use the <code>map</code> command to get a list of all the disks and partitions detected during boot.</li>
<li>Once you have the proper <code>fs#</code> information (and provided you want to add an entry that boots into a rEFInd installed on your EFI system partition under <code>EFI\refind\refind_x64.efi</code>), issue something like:<br /><pre class="brush: text">bcfg boot add 0 fs0:\EFI\refind\refind_x64.efi rEFInd</pre>
Note: If needed you can also use <code>bcfg boot rm #</code> to remove existing entries.</li>
<li>Confirm that your entry has been properly installed as the first option, by re-issuing <code>bcfg boot dump</code>. Then remove the USB, reset your machine, and you should find that everything is back to normal.</li>
</ul>
NOTE: Make sure you use the latest rEFInd if you want an EFI shell that includes bcfg. Not all EFI shells will contain that command!
Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com2tag:blogger.com,1999:blog-8361942945221983453.post-68150075635297076632014-05-30T00:12:00.002+01:002018-11-01T20:00:27.253+00:00RTF - Where's the FM?I mean a hands-on manual on how to create an Rich Text Format file from scratch, not the <a href="http://www.microsoft.com/en-us/download/confirmation.aspx?id=7105" target="_blank">friggin' 200 pages specs</a>! Plus, only Microsoft would provide a 200 pages Word Document as an executable... Oh well, it's not like I never saw IBM (or was it Intel?) providing some source code as a PDF file <b>with</b> page numbering.<br />
<br />
Man, what a struggle to figure out how to get Arabic RTF content to properly display in an app's Rich Edit control.<br />
<br />
If you try to be smart and have Wordpad produce your RTF for you, and even if you set your Arabic text to use a Unicode font, you end up with something like:<br />
<br />
<pre class="brush: text">{\rtf1 ... {\fonttbl{\f0\fnil\fcharset0 Courier New;}{\f1\fnil\fcharset178 @Arial Unicode MS;}}
\pard\ltrpar\f0 Some blurb\f1\rtlch\lang1025\'da\'e3\'d1 \'c7\'e1\'d5\'e3\'cf\b0\f0\ltrch\lang6153\par
}</pre>
...which results in UTTER GARBAGE on screen in place of the Arabic!<br />
<br />
I can't help but ask: what <b>is</b> the point of using a Unicode font, really, if that insanely dumb word processor that is Wordpad still insists on living in the 1980s, and switches codepages to insert CP-Whatever codepoints instead?<br />
<br />
So here's what you actually want to do, manually:<br />
<ul>
<li>remove the <code>\lang</code> switch</li>
<li>insert pure Unicode codepoints using <code>\u</code>
</li>
</ul>
But of course, it wouldn't be as backwards as possible if Microsoft didn't also <b>force</b> you to specify Unicode codepoints in decimal, with no means whatsoever of specifying hex instead. So even if you know the Arabic UTF-16 sequence you want to insert, you will have to spend some time doing your decimal conversions, to, at last, get the <b>properly working</b>:<br />
<br />
<pre class="brush: text">{\rtf1 ... {\fonttbl{\f0\fnil\fcharset0 Courier New;}{\f1\fnil\fcharset178 @Arial Unicode MS;}}
\pard\ltrpar\f0 Some blurb\f1\rtlch\u1575?\u1604?\u1589?\u1605?\u1583? \u1593?\u1605?\u1585?\ltrch\f0\
}</pre>
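If you need to produce those decimal escapes yourself, the shell can do the hex-to-decimal drudgery for you. This sketch converts the UTF-16 code points of the Arabic text above (U+0627, U+0644, U+0635, U+0645, U+062F) into the RTF <code>\u</code> escapes used in that snippet:

```shell
# Convert Unicode code points (hex) into the decimal \uN? escapes
# that RTF requires; these five are the ones used in the RTF above.
for cp in 0627 0644 0635 0645 062F; do
  printf '\\u%d?' "0x$cp"
done
echo   # prints \u1575?\u1604?\u1589?\u1605?\u1583?
```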
<br />
Heed my advice: if you design your format around the idea that no human will ever need to edit data in it in a hurry, you're designing it all wrong...<br />
<br />
As an aside, the above is also the reason why little-endian is an utter abomination that should be banned from the face of this earth: If I'm in a computer-controlled commercial airplane, that's lost all input, and, on account of the ground approaching fast, I'm in a bit of a hurry to figure out from a memory dump where the automatic pilot might store its altitude, to manually alter it, you bet that I'm gonna hope that whoever designed that plane picked a big-endian CPU, to slightly increase the probability of myself and all the other passengers not ending up as a pancake...<br />
<br />
First rule of designing anything is to design with the idea that humans will <b>always</b> need to interact with your stuff, in ways that you'll never be able to devise.<br />
<br />
So, Microsoft, next time you want to design something like RTF, please RTFM of Design rules and try to make it just a bit easier on people who need to manually interact with your stuff...Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com1tag:blogger.com,1999:blog-8361942945221983453.post-37846867199562856072014-05-29T02:29:00.005+01:002021-12-20T19:50:34.721+00:00Compiling and installing Grub2 for standalone USB bootThe goal here, is to produce the necessary set of files, <b>to be written to an USB Flash Drive using dd</b> (rather than using the Grub installer), so that it will boot through Grub 2.x and be able to process an existing <code>grub.cfg</code> that sits there.<br />
<br />
As usual, we start from nothing. I'll also assume that you know nothing about the intricacies of Grub 2 with regards to the creation of a bootable USB, so let me start with a couple of primers:<br />
<br />
<ol>
<li>For a BIOS/USB boot, Grub 2 basically works on the principle of a standard MBR (<code>boot.img</code>), that calls a custom second stage (<code>core.img</code>), which usually sits right after the MBR (sector 1, or <code>0x200</code> on the UFD) and which is a flat compressed image containing the Grub 2 kernel plus a user hand-picked set of modules (<code>.mod</code>).<br /> These modules, which get added to the base kernel, should usually limit themselves to the ones required to access the set of file systems you want Grub to be able to read a config file from and load more individual modules (some of which need to be loaded to parse the config, such as <code>normal.mod</code> or <code>terminal.mod</code>). <br />As you may expect, the modules you embed with the Grub kernel and the modules you load from the target filesystem are exactly the same, so you have some choice on whether to add them to the core image or load them from the filesystem.<br /> <br /></li>
<li>You most certainly do NOT want to use the automated Grub installer in order to boot a UFD. This is because the Grub installer is designed to try to boot the OS it is running from, rather than try to boot a random target in generic fashion. Thus, if you try to follow the myriad of quick Grub 2 guides you'll find floating around, you'll end up nowhere in terms of booting a FAT or NTFS USB Flash Drive that should be isolated from everything else.</li>
</ol>
With the above in mind, it's time to get our hands dirty. Today, I'm going to use Linux, because my attempts to build the latest Grub 2 using either MinGW32 or cygwin failed miserably (crypto compilation issue for MinGW, Python issue for cygwin, on top of the usual CRLF annoyances for shell scripts due to the lack of a .gitattributes). I sure wish I had the time to produce a set of fixes for the Grub guys, but right now, that ain't gonna happen ⇒ Linux it is.<br />
<br />
First step is to pick up the latest source, and, since we like living on the edge, we'll be using git rather than a release tarball:<br />
<br />
<pre class="brush: shell">git clone git://git.savannah.gnu.org/grub.git</pre>
<br />
Then, we bootstrap and attempt to configure for the smallest image size possible, by disabling NLS (which I had hoped would remove anything gettext-related, but that turns out not to be the case - see below).<br />
<br />
<pre class="brush: shell">cd grub
./autogen.sh
./configure --disable-nls
make -j2</pre>
<br />
After a few minutes, your compilation should succeed, and you should find that in the <code>grub-core/</code> directory, you have a <code>boot.img</code>, <code>kernel.img</code> as well as a bunch of modules (<code>.mod</code>).<br />
<br />
As explained above, <code>boot.img</code> is really our MBR, so that's good, but we're still missing the bunch of sectors we need to write right after that, that are meant to come from a <code>core.img</code> file.<br />
<br />
The reason we don't have a <code>core.img</code> yet is that it is generated dynamically, and we need to tell Grub exactly what modules we want in there, as well as the disk location where we want the kernel to look for additional modules and config files. To do just that, we need to use the Grub utility <code>grub-mkimage</code>.<br />
<br />
Now that last part (telling Grub that it should look at the USB generically and in isolation, and not give a damn about our current OS or disk setup) is what nobody on the Internet seems to have the foggiest clue about, so here goes: We'll want to tell Grub to use BIOS/MBR mode (not UEFI/GPT) and that we'll have one MBR partition on our UFD containing the boot data that's not included in <code>boot.img</code>/<code>core.img</code> and that it may need in order to proceed. And with BIOS setting our bootable UFD as the first disk (whatever gets booted is usually the first disk BIOS will list), we should tell Grub that our disk target is <code>hd0</code>. Furthermore, the first MBR partition on this drive will be identified as <code>msdos1</code> (Grub calls MBR-like partitions <code>msdos#</code>, and GPT partitions <code>gpt#</code>, with the index starting at <b><code>1</code></b>, rather than <b><code>0</code></b> as is the case for disks). <br />
<br />
Thus, if we want to tell Grub that it needs to look for the first MBR partition on our bootable UFD device, we must specify <code>(hd0,msdos1)</code> as the root for our target.<br />
With this being sorted, the only hard part remaining is figuring out the basic modules we need, so that Grub has the ability to actually identify and read stuff on a partition that may be FAT, NTFS or exFAT. To cut a long story short, you'll need at least <code>biosdisk</code> and <code>part_msdos</code>, and then a module for each type of filesystem you want to be able to access. Hence the complete command:<br />
<br />
<pre class="brush: shell">cd grub-core/
../grub-mkimage -v -O i386-pc -d. -p\(hd0,msdos1\)/boot/grub biosdisk part_msdos fat ntfs exfat -o core.img</pre>
<br />
NB: If you want to know what the other options are for, just run <code>../grub-mkimage --help</code><br />
Obviously, you could go crazy adding more file systems, but the one thing you want to pay attention to is the size of <code>core.img</code>. That's because if you want to keep it safe and stay compatible with the largest choice of disk partitioning tools, you sure want to have <code>core.img</code> below 32 KB minus 512 bytes. The reason is there still exists a bunch of partitioning utilities out there that default to creating their first partition on the second "track" of the disk. And for most modern disks, including flash drives, a track will be exactly 64 sectors. What this all means is, if you don't want to harbour the possibility of overflowing <code>core.img</code> onto your partition data, you really don't want it to be larger than <code>32256</code> or <code>0x7E00</code> bytes.<br />
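To make that limit concrete, here's the arithmetic as a small self-contained check (using a stand-in file, since the size of your actual <code>core.img</code> depends on the modules you picked):

```shell
# The embedding gap is 63 sectors of 512 bytes = 32256 (0x7E00) bytes.
MAX=$((63 * 512))
# Stand-in for the real core.img, so the snippet runs on its own:
head -c 20000 /dev/zero > core-test.img
SIZE=$(stat -c %s core-test.img)
if [ "$SIZE" -le "$MAX" ]; then
  echo "image fits: $SIZE <= $MAX bytes"
else
  echo "image too big: $SIZE > $MAX bytes"
fi
```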
OK, so now that we have <code>core.img</code>, it's probably a good idea to create a single partition on our UFD (May I suggest using <a href="https://rufus.akeo.ie/">Rufus</a> to do just that? ;)) and format it to either FAT/FAT32, NTFS or exFAT.<br />
<br />
Once this is done, we can flat-copy both the MBR, a.k.a. <code>boot.img</code>, and <code>core.img</code> onto those first sectors. The one thing you want to pay attention to here is, while copying <code>core.img</code> is no sweat, because we can just use a regular 512 byte sector size, for the MBR, you need to make sure that <b>only</b> the first 440 bytes of <code>boot.img</code> are copied, so as not to overwrite the partition data and the disk signature that also resides in the MBR and that has already been filled. So please pay close attention to the <code>bs</code> values below:<br />
<br />
<pre class="brush: shell">dd if=boot.img of=/dev/sdb bs=440 count=1
dd if=core.img of=/dev/sdb bs=512 seek=1 # seek=1 skips the first block (MBR)</pre>
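To see why the <code>bs=440</code> matters, here is a self-contained illustration on a dummy 512-byte MBR file (no real device involved): only the first 440 bytes get overwritten, so the <code>0x55AA</code> boot marker at offset 510, and by the same token the partition table and disk signature, survive:

```shell
# Create a dummy "MBR" whose only content is the 0x55AA marker at
# offset 510 (0x55 = octal 125, 0xAA = octal 252), then overwrite the
# first 440 bytes the way the boot.img copy does, and read it back.
# (conv=notrunc is only needed because we write to a file, not a device.)
printf '%b' '\0125\0252' | dd of=disk.img bs=1 seek=510 conv=notrunc 2>/dev/null
head -c 440 /dev/zero | dd of=disk.img bs=440 count=1 conv=notrunc 2>/dev/null
od -An -tx1 -j 510 -N 2 disk.img   # prints: 55 aa
```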
<br />
Side note: Of course, instead of using plain old <code>dd</code>, one could have used Grub's custom <code>grub-bios-setup</code> like this:<br />
<br />
<pre class="brush: shell">../grub-bios-setup -d. -b ./boot.img -c ./core.img /dev/sdb</pre>
<br />
However, the whole point of this little post is to figure out a way to add Grub 2 support to Rufus, in which we'll have to do the copying of the <code>img</code> files without being able to rely on external tools. Thus I'd rather demonstrate that a <code>dd</code> copy works just as well as the Grub tool for this.<br />
After having run the above, you may think that all that's left is copying a <code>grub.cfg</code> to <code>/boot/grub/</code> onto your USB device, and watch the magic happen... but you'll be wrong.<br />
<br />
Before you can even think about loading a <code>grub.cfg</code>, and at the very least, Grub <b>MUST</b> have loaded the following modules (which you'll find in your <code>grub-core/</code> directory and that need to be copied to the target, into a <code>/boot/grub/<b>i386-pc</b>/</code> folder):<br />
<ul>
<li><code>boot.mod</code></li>
<li><code>bufio.mod</code></li>
<li><code>crypto.mod</code></li>
<li><code>extcmd.mod</code></li>
<li><code>gettext.mod</code></li>
<li><code>normal.mod</code></li>
<li><code>terminal.mod</code></li>
</ul>
As to why the heck we still need <code>gettext.mod</code>, when we made sure we disabled NLS, and also why we must have <code>crypto</code>, when most usages of Grub don't care about it, your guess is as good as mine...<br />
<br />
Finally, to confirm that everything works, you can add <code>echo.mod</code> to the list above, and create a <code>/boot/grub/grub.cfg</code> on your target with the following:<br />
<br />
<pre class="brush: text">insmod echo
set timeout=5
menuentry "test" {
echo "hello"
}</pre>
<br />
Try it, and you should find that your Grub 2 config is executing at long last, whether your target filesystem is FAT, NTFS or exFAT, and you can now build custom bootable Grub 2 USBs on top of that. Isn't that nice?<br />
<br />
<b>FINAL NOTE:</b> In case you're using this to try to boot an existing Grub 2 based ISO from USB (say <a href="http://aros.sourceforge.net/">Aros</a>), be mindful that, since we are using the very latest Grub code, there is a chance that the modules from the ISO and the kernel we use in core may have some incompatibility. In particular, you may run into the obnoxious:<br />
<br />
<pre class="brush: text">error: symbol 'grub_isprint' not found.</pre>
<br />
What this basically means is that there is a mismatch between your Grub 2 kernel version and Grub 2 module. To fix that you will need to use kernel and modules from the same source.Pete Batardhttp://www.blogger.com/profile/09315085625194033420noreply@blogger.com9tag:blogger.com,1999:blog-8361942945221983453.post-7041063053622171312014-01-14T02:30:00.002+00:002018-10-15T18:57:22.127+01:00Using PHP-Gettext to localize your web pagesThis is what I am now using for the <a href="https://rufus.akeo.ie/" target="_blank">Rufus Homepage</a>. As usual, it took way too long to find all the pieces needed to solve this specific problem, so I'm going to write a guide that has them all in a single place.<br />
<br />
<h3>
What we want:</h3>
<ol>
<li>A web page that detects the language from the browser, and, if a translation exists, displays that translation. If not, it falls back to the English version.</li>
<li>A menu somewhere, that lets users pick from a list of supported languages, independently of the one set by their browser.</li>
<li>An easy to use process for translators, that relies on the well known tools of the trade (i.e. gettext and Poedit).</li>
<li>All of the above in a <b>single</b> web page, so that we can keep all the common parts together, and don't have to duplicate changes.</li>
</ol>
<h3>
<br />Where we start:</h3>
<ul>
<li>A web server that we control fully, and that natively supports UTF-8. I'll only say this once: In 2014, if you still don't use UTF-8 everywhere you can, then you don't deserve to host a web page, let alone administer a web server.</li>
<li>A single <code>index.html</code> page, in English/UTF-8, that contains pure HTML (possibly with a little sprinkling of JavaScript, but not much else).</li>
</ul>
Aaaand, that's about it really.<br />
<br />
<h3>
<br />Prerequisites:</h3>
Because we have complete control of the server, we're going to use <a href="http://ie2.php.net/gettext" target="_blank">PHP Gettext</a>.<br />
Why? Because it relies on <a href="http://www.gnu.org/software/gettext/" target="_blank">gettext</a>, which is a mature translation framework, with solid support (including a nice GUI translation application for Windows & Mac called <a href="http://www.poedit.net/" target="_blank">Poedit</a>) and also because <a href="http://mel.melaxis.com/devblog/2006/04/10/benchmarking-php-localization-is-gettext-fast-enough/" target="_blank">the performance hit of using PHP Gettext seems to be minimal compared to the alternatives</a>. Finally, using PHP gives us the ability to simply edit our existing HTML and insert PHP code wherever we need a translation, which should make the whole process a breeze.<br />
<br />
Thus, the first two items you need to install on your server then, if you don't have them already, will be PHP (preferably v5 or later) as well as <code>php-gettext</code>, plus all dependencies those two packages may have.<br />
<br />
Then, you will need to install <code>php5-intl</code>, so that we can use the <code>locale_accept_from_http()</code> function call to detect the browser locale of our visitors.<br />
<br />
Finally, you <b>must</b> ensure that your server provides ALL the locales you are planning to support, in UTF-8. In particular, issuing <code>locale -a | grep utf8</code> on your server must return AN AWFUL LOT of entries (on mine, I get more than 150 of them, and that is the way it should be).<br />
If issuing <code>locale -a | grep utf8 | wc -l</code> returns less than 100 entries, then, unless you are planning to restrict your site to only a small part of the world, you will need to first sort that out, for instance by installing the <code>locales-all</code> package. This is because gettext will not support a locale that is unknown to the system. For instance, if you don't see <code>fr_CA.utf8</code> listed in your <code>locale -a</code>, then no matter what you do, even if you have other French locales listed, gettext will not know how to handle browsers that are set to Canadian French. You have been warned!<br />
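Here's a quick self-check along those lines (note that depending on the distribution, <code>locale -a</code> reports names as either <code>utf8</code> or <code>UTF-8</code>, so we match both):

```shell
# Count the UTF-8 locales the system provides, then check for one
# specific locale (fr_CA, from the example above).
COUNT=$(locale -a 2>/dev/null | grep -Eci 'utf-?8')
echo "UTF-8 locales available: $COUNT"
if locale -a 2>/dev/null | grep -Eqi '^fr_CA\.utf-?8$'; then
  echo "fr_CA.utf8: present"
else
  echo "fr_CA.utf8: missing (consider installing locales-all)"
fi
```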
<br />
<h3>
<br />Testing PHP gettext support:</h3>
At this stage, I will assume that you have <code>php5</code>, <code>php5-intl</code>, <code>php-gettext</code> and possibly other dependencies such as <code>libapache2-mod-php5</code>, <code>gettext</code> and co. installed. If you are using Apache2, you may also have to enable the PHP5 module, by symlinking <code>php5.conf</code> and <code>php5.load</code> in your <code>/etc/apache2/mods-enabled/</code>, and possibly edit <code>php5.conf</code> to allow running PHP scripts in user directories (which is disabled by default).<br />
<br />
The first thing we'll do, to check that everything is in order before starting with localization, is simply create an <code>info.php</code>, at the same location where you have your <code>index.html</code>, and that contains the following one liner:<br />
<pre class="brush: php"><? phpinfo(); ?></pre>
<br />
Now, you should navigate to <code><your_website>/info.php</code> and confirm that:<br />
<ol>
<li>You get a whole bunch of PHP information from your server</li>
<li>In this whole set of data, you see a line stating "GetText Support: enabled"</li>
</ol>
If you don't see any of the above, then you will need to sort your PHP settings before proceeding, as everything that follows relies on having at least the above working. For one, we want to confirm that both PHP and the short script form (<code><?</code> rather than <code><?php</code>), which is what we'll use in the code below, are working, and also, get some assurance that gettext is enabled. So make sure to edit your <code>php.ini</code> or conf settings, if you need to sort things out.<br />
<br />
Once you got the above simple test going, you should delete that <code>info.php</code> file, as you don't want attackers to know too much about the PHP and server settings you're running under.
<br />
<h3>
<br />Let's get crackin'</h3>
<br />
With PHP now confirmed working, let's set our translation rolling with PHP-Gettext. For that I'm going to loosely follow <a href="http://mel.melaxis.com/devblog/2005/08/06/localizing-php-web-sites-using-gettext/" target="_blank">this guide</a>. I say loosely, because I found that it was woefully incomplete and left out the most crucial parts.<br />
<ol>
<li>Start by duplicating your existing <code>index.html</code> as <code>index2.php</code>. This will enable us to work on adding translations to <code>index2.php</code> without interfering with the existing site, until we're happy enough that we can replace <code>index.html</code> altogether. Of course we picked <code>index2.php</code> rather than <code>index.php</code>, to make sure our server doesn't try to serve the file we're testing instead of the live <code>index.html</code> that's assumed to already exist in that directory.<br />
<br />
</li>
<li>In <code>index2.php</code>, and provided you want to test a French translation (you don't really have to speak French if you just want to test that things work), somewhere after the initial <code><html></code> tag, add the following PHP header:<br />
<br />
<pre class="brush: php"><?
$langs = array(
'en_US' => array('en', 'English (International)'),
'fr_FR' => array('fr', 'French (Français)'),
);
$locale = "en_US";
if (isset($_SERVER["HTTP_ACCEPT_LANGUAGE"]))
$locale = locale_accept_from_http($_SERVER["HTTP_ACCEPT_LANGUAGE"]);
if (isSet($_GET["locale"])) {
$locale = $_GET["locale"];
}
$locale = preg_replace("/[^a-zA-Z_]/", "", substr($locale,0,5));
foreach($langs as $code => $lang) {
if(substr($locale,0,strlen($lang[0])) == $lang[0]) {
$locale = $code;
break;
}
}
// Must append ".utf8" suffix here, else languages such as Azerbaijani won't work
setlocale(LC_MESSAGES, $locale . ".utf8");
// Also set the LANGUAGE variable, which may be needed on some systems
putenv("LANGUAGE=" . $locale);
bindtextdomain("index", "./locale");
bind_textdomain_codeset("index", "UTF-8");
textdomain("index");
?></pre>
<br />
What this code does is:<ul>
<li>Create an array of languages that we will support from the language selection menu (here English and French). You'll notice that this is actually an array of arrays, but more about this later.</li>
<li>After setting the default to English, read the preferred locale from the browser, if <code>HTTP_ACCEPT_LANGUAGE</code> is defined (<code>isset(...)</code>), using <code>locale_accept_from_http()</code>. If that locale is not overridden with a <code>?locale=</code> parameter passed on the URL, it's the one that will be used throughout the rest of the file.</li>
<li>Find if a <code>locale</code> parameter was passed on the URL and set the <code>$locale</code> variable to it if that's the case.</li>
<li>Sanitize the locale parameter to ensure that it contains only alphabetical characters or underscores, and is no more than 5 characters long (anything that can be entered by users must be considered potentially harmful and SHOULD BE SANITIZED!).</li>
<li>Ensure that if we get a short locale (eg. <code>fr</code> rather than <code>fr_FR</code>), or if we get a locale for a language we support, but for a region that we don't (eg. <code>fr_CA</code>), we convert it to the closest <code>locale_REGION</code> form we support. This is very important, as the browser may only provide us with <code>fr</code> or <code>fr_CA</code> when invoking <code>locale_accept_from_http</code>, and we want these locales mapped to <code>fr_FR</code> for subsequent processing.</li>
<li>Tell gettext that it should use UTF-8 and look for <code>index.mo</code> in a <code>./locale/<LOCALE>/LC_MESSAGES/</code> subdirectory for translations (eg. <code>./locale/fr/LC_MESSAGES/index.mo</code>).</li>
</ul>
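To see how this mapping behaves, here is a minimal, standalone sketch of the same sanitize-and-match logic, wrapped in a hypothetical <code>map_locale()</code> helper (not part of the page code, just so it can be tested in isolation; also note that, for simplicity, unsupported locales fall back to English here, whereas the page code above simply leaves them untouched):<br />
<br />
<pre class="brush: php"><?php
// Hypothetical helper replicating the sanitize-and-map logic above
function map_locale($locale, $langs) {
  // Keep only letters and underscores, 5 characters max
  $locale = preg_replace("/[^a-zA-Z_]/", "", substr($locale, 0, 5));
  // Map a short or regional locale to the closest supported one
  foreach ($langs as $code => $lang) {
    if (substr($locale, 0, strlen($lang[0])) == $lang[0]) {
      return $code;
    }
  }
  return "en_US"; // simplification: fall back to the default
}

$langs = array(
  'en_US' => array('en', 'English (International)'),
  'fr_FR' => array('fr', 'French (Français)'),
);
echo map_locale("fr_CA", $langs), "\n"; // fr_FR
echo map_locale("fr", $langs), "\n";    // short 'fr' also yields fr_FR
echo map_locale("en_GB", $langs), "\n"; // en_US
</pre>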
<br />
</li>
<li>Somewhere in a <code>div</code> (eg. the one for a right sidebar) add the following code for the language selection menu:<br />
<br />
<pre class="brush: php"><select onchange="self.location='?locale='+this.options[this.selectedIndex].value">
<? foreach($langs as $code => $lang): ?>
<option <? if(substr($locale,0,strlen($lang[0])) == $lang[0]) echo "selected=\"selected\"";?> value="<?= $code;?>">
<?= $lang[1]; ?>
</option>
<? endforeach; ?>
</select></pre>
<br />
What this code does is:<ul>
<li>Create a dropdown with all the languages from our <code>$langs</code> array.</li>
<li>Check whether the first characters of our <code>$locale</code> match the short language code from our array, and set that dropdown entry as the selected one if they do. This ensures that "French" will be selected in our dropdown, regardless of whether the locale is <code>fr_CA</code>, <code>fr_FR</code> or any of the other <code>fr_XX</code> locales.</li>
<li>When a user selects an entry from the dropdown, add a <code>?locale=en_US</code> or <code>?locale=fr_FR</code> to the URL, to force the page to be refreshed using that language.</li>
</ul>
<br />
</li>
<li>For every place where you want to translate a string, use something like <code><?= _("Hello, world");?></code>, where <code><?=</code> is the short version of <code><?php echo</code> and <code>_(</code> is the actual call to gettext. What gettext then does is find out whether a translation exists for the string passed as parameter, and either use it if it exists, or fall back to the original untranslated string otherwise.<br />
<br />
</li>
<li>Of course, you can use the whole gamut of PHP function calls, and say, if you want to insert a variable in your translated string, such as a date, do something like:<br /><code><? printf(_("Last updated %s:"), $last_date);?></code>.<br /> Also, if needed, and this is something that is <b>very useful to know</b>, you can insert translator notes using comments (<code>/* ... */</code>) within your PHP, before the <code>_(...)</code> calls. These comments will then be displayed for all translators to see in Poedit (as long as you used the <code>-c</code> option when creating your PO catalog with <code>xgettext</code>).<br />
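For instance, a formatted string and a translator note can be combined as in the following self-contained sketch (the <code>function_exists</code> guard is only there so the snippet also runs where the gettext extension is absent, in which case <code>_()</code> simply returns the untranslated string):<br />
<br />
<pre class="brush: php"><?php
// Fallback so the sketch runs even without the gettext extension
if (!function_exists('_')) {
  function _($s) { return $s; }
}

$last_date = "2014-02-14"; // assumed to be set elsewhere in the real page

/* TRANSLATORS: %s is replaced with the date of the last update,
   eg. "Last updated 2014-02-14:" */
printf(_("Last updated %s:"), $last_date);
</pre>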
<br />
</li>
<li>Save your <code>index2.php</code> and confirm that you get to see the English strings, the dropdown with 2 entries, as well as <code>?locale=fr_FR</code> or <code>?locale=en_US</code> appended to the URL when you select an entry from the dropdown. Of course, since we haven't created any translation for French, the English text still displays when French is selected, as the default of gettext is to use the original if a translation is missing, but we will address that shortly.<br />
<br />
</li>
<li>Create a <code>./locale/fr/LC_MESSAGES/</code> set of subdirectories, at the location where you have your <code>index2.php</code> page.<br />
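On a Unix-like server, the whole tree can be created in one go:<br />
<br />
<pre class="brush: bash">#!/bin/sh
# Create the subdirectories where gettext will look for index.mo
mkdir -p ./locale/fr/LC_MESSAGES</pre>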
<br />
</li>
<li>Now we need to generate the gettext catalog, or <code>POT</code>, which is the file you provide to translators so they can start creating a translation. While <a href="http://www.poedit.net/" target="_blank">Poedit</a> is supposed to be able to process a PHP file to generate a <code>.pot</code>, I couldn't for the life of me figure out how to do that with the Windows version. Moreover, the <code>.pot</code> creation is really something you want to do on the server anyway, so, to cut a long story short, we're just going to call <code>xgettext</code>, using a script, to produce our <code>.pot</code> on the server. Here is the content of that script:<br />
<br />
<pre class="brush: bash">#!/bin/sh
xgettext --package-version=1.0 --from-code=UTF-8 --copyright-holder="Pete Batard" --package-name="Rufus Homepage" --msgid-bugs-address=pete@akeo.ie -L PHP -c -d index -o ./locale/index.pot index2.php
sed --in-place ./locale/index.pot --expression='s/SOME DESCRIPTIVE TITLE/Rufus Homepage/'
sed --in-place ./locale/index.pot --expression='1,6s/YEAR/2014/'
sed --in-place ./locale/index.pot --expression='1,6s/PACKAGE/Rufus/'
sed --in-place ./locale/index.pot --expression='1,6s/FIRST AUTHOR/Pete Batard/'
sed --in-place ./locale/index.pot --expression='1,6s/EMAIL@ADDRESS/pete@akeo.ie/'</pre>
<br />
Running the above, in the directory where we have our PHP, creates our <code>index.pot</code> under the <code>./locale/</code> subdirectory, and fills in some important variables that <code>xgettext</code> mysteriously doesn't seem to provide any means to set. As you can see, we used the <code>-c</code> option so that any notes to translators that we added using PHP comments are carried over. <br />
<br />
</li>
<li>Now we're getting into the part that is generally meant to be done by a translator: download the <code>index.pot</code>, and open it in <a href="http://www.poedit.net/" target="_blank">Poedit</a>. From there, set your target language (here <code>fr_FR</code>) and translate the various strings (eg. <i>"Hello, world"</i> → <i>"Bonjour, monde"</i>). Save your translation as <code>index.po</code>/<code>index.mo</code> (Poedit will create both files) and upload <code>index.mo</code> to <code>./locale/fr/LC_MESSAGES/</code>.<br />
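If you only receive the <code>.po</code> from a translator, you can also compile the <code>.mo</code> yourself on the server using <code>msgfmt</code> (a sketch, assuming the GNU gettext tools are installed; the tiny catalog below is only for illustration):<br />
<br />
<pre class="brush: bash">#!/bin/sh
mkdir -p ./locale/fr/LC_MESSAGES
# Minimal French catalog, for illustration only
cat > index.po << 'EOF'
msgid ""
msgstr ""
"Content-Type: text/plain; charset=UTF-8\n"

msgid "Hello, world"
msgstr "Bonjour, monde"
EOF
# Compile the .po into the binary .mo that gettext actually reads
msgfmt -o ./locale/fr/LC_MESSAGES/index.mo index.po</pre>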
<br />
</li>
<li><i>Voilà!</i> If you did all of the above properly and select French in the dropdown or use a browser that has French as its preferred language, then you should now see the relevant sections translated. <i>"C'est magique, non?"</i><br />
<br />
</li>
<li>From there, you will of course need to add PHP for all of the page content that you want to see translated, by enclosing the English text in <code><?= _(...);?></code> sections (don't worry about the constant switching between HTML and PHP mode - PHP is designed to be very efficient at doing just that!). Once you're happy, just rename your <code>index2.php</code> to <code>index.php</code> (but make sure to remove your <code>index.html</code> first, or you may run into weird issues), and you are fully ready to get your content localized. To do that, just run the <code>POT</code> creation script again (editing the script if needed, so that it applies to <code>index.php</code> now), and provide <code>index.pot</code> to your translators. Then wait for them to send you their <code>.mo</code> files, edit the code above to add a new array line for each extra language, and watch in awe as visitors experience your site in that new language. Now, it wasn't that hard after all, was it?
<br />
</li>
</ol>
<h3>
<br />
Additional remarks:</h3>
<h4>
Can't we just do away with the double <code>fr_FR</code> and <code>fr</code> in our array?</h4>
Unfortunately, no. The short explanation is: even after you place your translation under a <code>/fr/</code> subdirectory, so that it is used by default when your locale is <code>fr_FR</code>, <code>fr_CA</code>, <code>fr_BE</code>, <code>fr_CH</code> and so on, gettext still can't work with a locale that is just set to <code>fr</code>. This is because, as explained in the Prerequisites, if your system doesn't have an <code>fr</code> or <code>fr.utf8</code> listed by <code>locale -a</code>, gettext just doesn't know how to handle that language.<br />
<br />
Now, the long explanation as to why we couldn't just use a single <code>fr_FR</code> in our <code>$langs</code> array is: we want to smartly set our dropdown to French even when <code>fr_CA</code> is provided, yet we can't do something as simple as picking the first two characters of the array locale, because we also want to support <b>both</b> <code>pt_PT</code> and <code>pt_BR</code>, as well as <code>zh_CN</code> and <code>zh_TW</code>, as <b>separate</b> languages (because that's pretty much what they are). So, if we were to just isolate the substring up to the underscore, and we had <code>zh_CN</code> defined before <code>zh_TW</code> in our array, Traditional Chinese speakers would see the dropdown set to Simplified Chinese, and that's not what we want.<br />
<br />
Thus, for our dropdown selection comparison, we must provide a value that is the lowest common denominator we want the language to apply to, which can be either a simple <code>fr</code> or <code>es</code>, or a longer <code>pt_BR</code> or <code>zh_CN</code>. But as we explained previously, we can't use that lowest common denominator for locale selection, as gettext might not know how to handle it. And that is why we need to duplicate part of the locale in two places in our array.<br />
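A quick sketch shows that, with a full entry per variant, the prefix comparison sorts this out regardless of array order (a hypothetical extended <code>$langs</code>; the <code>match_lang()</code> helper just replays the comparison the dropdown uses):<br />
<br />
<pre class="brush: php"><?php
// Hypothetical extended array: the second field is the shortest
// prefix that each entry should match against
$langs = array(
  'en_US' => array('en', 'English (International)'),
  'fr_FR' => array('fr', 'French (Français)'),
  'pt_PT' => array('pt_PT', 'Portuguese (Portugal)'),
  'pt_BR' => array('pt_BR', 'Portuguese (Brasil)'),
  'zh_CN' => array('zh_CN', 'Chinese (Simplified)'),
  'zh_TW' => array('zh_TW', 'Chinese (Traditional)'),
);

// Replays the same prefix comparison the dropdown performs
function match_lang($locale, $langs) {
  foreach ($langs as $code => $lang) {
    if (substr($locale, 0, strlen($lang[0])) == $lang[0]) {
      return $code;
    }
  }
  return "en_US";
}

echo match_lang("fr_CA", $langs), "\n"; // fr_FR: caught by the short 'fr'
echo match_lang("zh_TW", $langs), "\n"; // zh_TW: not swallowed by zh_CN
echo match_lang("pt_BR", $langs), "\n"; // pt_BR: kept separate from pt_PT
</pre>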
<br />
<code><rant></code>Of course, it would be oh so much simpler if OSes agreed that short locales without a region are perfectly valid entities by default, especially as gettext doesn't seem to have any issue accepting them when looking for <code>.mo</code> files, but hey, that's localization for you: no-one EVER manages to get it right...<code></rant></code><br />
<br />
<h4>
How about a real-life example?</h4>
Alright... Since I'm all about Open Source, let me show you exactly how I am applying all of the above to the <a href="https://rufus.akeo.ie/" target="_blank">Rufus Homepage</a>. You can click the following to access the current <a href="https://github.com/pbatard/rufus-web/blob/master/public_html/index.php" target="_blank"><code>index.php</code></a> source for the Rufus site, as well as the <a href="https://rufus.akeo.ie/locale/" target="_blank"><code>locale/</code></a> subdirectory. There's also <a href="https://github.com/pbatard/rufus/wiki/Localization#wiki-Translating_the_Rufus_Homepage" target="_blank">this guide</a>, which I provide to any translator who volunteers to create a translation for the homepage. Hopefully, these will help you fill in any blanks, and allow you to provide an awesome multilingual web page!<br />
<br />
<h4>
What about right-to-left languages?</h4>
Look at the PHP source and search for the use of the <code>$dir</code> variable.
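For the impatient, here is a minimal, hypothetical sketch of what such a <code>$dir</code> variable could boil down to (the actual Rufus source may differ):<br />
<br />
<pre class="brush: php"><?php
// Hypothetical list of right-to-left language prefixes
$rtl_langs = array("ar", "fa", "he");
$locale = "ar_SA"; // as negotiated earlier in the page

$dir = in_array(substr($locale, 0, 2), $rtl_langs) ? "rtl" : "ltr";
// The dir attribute then flips the whole page layout
echo '<html dir="' . $dir . '">', "\n";
</pre>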