2011-07-15

Creating a bootable UEFI DUET USB stick

Now that one of my patches has made it into the UEFI/EDK2 SVN repository, I'm going to provide a quick guide on the easiest way to create a bootable UEFI USB stick for legacy platforms, on Windows.
In case you are not familiar with EFI/UEFI, it is very much possible to run EFI even on legacy (i.e. non EFI) hardware, through a process called DUET, which provides a complete EFI emulation layer using the underlying BIOS. This is great for testing, without having to modify your hardware in any way.

It is all provided by the DuetPkg of the EDK2. However, the main issue with EDK2 is that it may be difficult to get working unless you use the same development tools as the EDK2 developers, which, on Windows, means Visual Studio 2008, a product that is not free.

Isn't there a simple way to compile a DUET USB boot stick on Windows, using freely available tools?
Glad you asked. There is now, and here is the complete process that goes with it:
  • Download and install the Windows Server 2003 WinDDK, v3790.1830, using this direct link (230 MB ISO download). This is of course not the latest DDK, but unless you want to waste time on extra configuration, you might as well go with the supported version. If I find the time to fix EDK2 for the latest one, I may send another patch to the EDK2 team, but for now, this will have to do.
  • (Optional) If you plan to use the ASL compiler to compile ACPI tables, which isn't needed for DUET, create a C:\ASL directory, download the Microsoft ASL Assembler/Compiler archive from here, and open the .exe with 7-zip. Then open the .msi from within the .exe (Open Inside), find the file _C6C6461BF87C49D66033009609D53FA5, extract it to your C:\ASL directory and rename it asl.exe (count on Microsoft to provide an installer... for a mere 55 KB exe). Also note that the ASL compiler is 32 bit only. If you're running 64 bit, tough luck.
  • Mount or burn the ISO and install the WinDDK, to the default C:\WINDDK\3790.1830 directory.
  • If needed, download and install TortoiseSVN, or any other SVN client.
  • Fetch the latest EDK2 from SVN, using https://edk2.svn.sourceforge.net/svnroot/edk2/trunk/edk2 as the SVN repository.
  • Open a DDK console ("Development Kits" → "Win2k3 Free x64 Build Environment" or "Win2k3 Free x86 Build Environment") and navigate to your edk2 directory.
  • Run the command:
    edksetup.bat
    This will initialize your environment by creating the required files in the Conf\ directory. You can safely ignore the warning about cygwin.
  • Open the file Conf\target.txt and edit the TOOL_CHAIN_TAG line to have:
    TOOL_CHAIN_TAG = DDK3790xASL
    You may also want to change MAX_CONCURRENT_THREAD_NUMBER, and make sure that TARGET_ARCH is properly set to either IA32 or X64 according to your needs, as this avoids having to pass an extra arch parameter when invoking the commands below (see the sample Conf\target.txt excerpt at the end of this post).
  • In the console, run (32 bit):
    build -p DuetPkg\DuetPkgIA32.dsc
    or (64 bit)
    build -p DuetPkg\DuetPkgX64.dsc
    It may take a while but it should complete successfully.
  • cd to DuetPkg\ and run:
    PostBuild.bat
  • Plug in your USB key and check its drive letter (I will assume it is F: in the following), then run:
    CreateBootDisk.bat usb F: FAT16
  • Unplug and replug the key as requested, and run:
    CreateBootDisk.bat usb F: FAT16 step2
You should now have a bootable UEFI USB key. Plug that into a PC, boot it, and you will be welcomed into the wonderful world of EFI.
Note that if you rebuild any of the packages, you only have to perform the last step to update the key.
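For reference, here is roughly what the relevant lines of Conf\target.txt end up looking like for a 64 bit build (the key names are the ones mentioned above; the thread count is just an example, pick whatever matches your number of CPU cores):
TARGET_ARCH                   = X64
TOOL_CHAIN_TAG                = DDK3790xASL
MAX_CONCURRENT_THREAD_NUMBER  = 4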

2011-07-11

Introducing UBRX - a Universal BIOS Recovery Console for x86 PCs

Following up on the previous BIOS generation endeavour, and wanting to demonstrate to any naysayer that a universal recovery console (or 'panic room') in the BIOS bootblock of an x86 system is not something from the realm of fantasy, it is my pleasure to introduce UBRX, the Universal Bootblock Recovery console for X86 systems.

In its current version (0.1), it is mostly a Proof of Concept aimed at the coreboot team, since the crafting of a 'panic room' type bootblock has been on their mind for some time. My understanding is that their approach would be to make such a bootblock platform specific (the motherboard would first need to be supported by coreboot), whereas I believe there is a way to be a lot more generic, and produce a 'panic room' feature that works on all platforms, even the ones that coreboot doesn't support yet. The current bootblock is less than 2 KB, which ought to leave plenty of room for a Y-modem, CAR, and a full console implementation.

If your aim is to develop a real BIOS from scratch, and you have a PC that is more recent than the year 2000 (most PCs from the late nineties should work too), UBRX might also be of interest to you, since it should provide you with a serial console. In short, UBRX can also be used as a base for BIOS development, regardless of the machine you have.

What UBRX implements, then, is a safe method for the detection of a PnP Super I/O chip, as well as the detection and initialisation of a 16550 compatible UART there, in order to provide on-demand access to a serial console. The emphasis here is on the safety of the detection being performed, as it is intended to be executed at every boot, without adverse consequences for non PnP Super I/O hardware residing in the same hardware space as the one we check, or for any non UART function residing in the Super I/O. To find out more about how we achieve detection safety, I suggest you check the "Detection Primer" section of the UBRX README.

Now, of course, as with "unlimited" broadband, the "universal" applicability of UBRX comes with the understanding that some fairness needs to be applied to the claim. As such, non PnP Super I/Os, PnP Super I/Os that require uncommon initialization (I'm looking at you, ITE), and platforms using CPUs other than Intel or AMD are not currently supported. Also, due to the lack of datasheets from nVidia, it is likely that UBRX will not work with motherboards sporting an nVidia chipset. However, if you have an Intel motherboard with an ICH# chipset, or an AMD motherboard with an SB##0 SouthBridge (which probably covers more than 90% of PCs from the last few years), UBRX is expected to work.

Finally, as UBRX is only a Proof of Concept for now, the console is limited to a serial repeater, so there's not much you can do with it. In particular, it is missing CAR (Cache As RAM) initialization and Y-modem functionality, which would allow transferring and running bare metal executables to do interesting things, such as flashing the remainder of the BIOS, initializing the full range of RAM according to the hardware, or loading a debugger. Therefore, if you choose to test and flash UBRX, remember that you must have a means of reflashing your BIOS externally, as your PC will otherwise become unusable.

Please head to the github project for more info. A tar archive of the source can also be downloaded here.

2011-07-07

git remote repository

If you're like me, you probably have a small Linux server somewhere, begging to be used as a remote central git repository, if only for the purpose of backing up your development projects on a second machine. A Linux based wireless router or plug computer is perfect for such a job and, as with any remote VCS, can be invaluable in helping you recover from mishaps on your development rig. And of course, if you're doing development and testing on several computers at once, being able to push/pull from a central server makes your life a lot easier.

Git daemon

By default, git includes a daemon utility to serve clients using the git:// protocol, so you don't even have to install anything extra. The one thing that may matter to you is that the default git daemon does not provide any authentication mechanism, so anybody on your network will be able to push to or pull from a project. For a SOHO setup this isn't a big deal, but in a corporate environment you may want to look into something a bit more secure.

Our first order of the day, then, is to decide on the directory we want to serve git projects from on the server, and to start the git daemon. In this example, I will use /usr/src on the server. The command to run git daemon is then:
git daemon --reuseaddr --base-path=/usr/src --export-all --verbose --enable=receive-pack --detach --syslog
The --export-all option allows pull/clone to be issued by remote clients, regardless of whether a git-daemon-export-ok file exists in the git directory. The --enable=receive-pack option is required if you want to be able to push from a client, as it is disabled by default, and without it you will get the error: 'receive-pack': service not enabled for './.git'. The rest of the options should be fairly self-explanatory (if not, see the git daemon page). As you can see, these settings are as permissive as can be, which is probably what you want if you are just getting started with using git as a server.
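If you want the daemon to come back up after a reboot, a minimal sketch is to append that same command to whatever startup script your distribution uses (I'm assuming /etc/rc.local and git living in /usr/bin here; adjust to your setup). You can then confirm that the daemon is listening on the default git port (9418):
# e.g. appended to /etc/rc.local (the mechanism will vary with your distribution)
/usr/bin/git daemon --reuseaddr --base-path=/usr/src --export-all \
    --verbose --enable=receive-pack --detach --syslog
# quick check that the daemon is listening on the default git port
netstat -an | grep 9418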

Creating the server repository

With the daemon running (check your syslog or messages to confirm), we can now create the directory we want to use as the origin on our server. Let's call it myproject. Thus:
root@git-server:~# cd /usr/src/
root@git-server:/usr/src# mkdir myproject
root@git-server:/usr/src# cd myproject/
root@git-server:/usr/src/myproject# git init
Initialized empty Git repository in /usr/src/myproject/.git/
Note that the git people historically recommend creating server project directories with a .git extension (i.e. myproject.git), but there doesn't seem to be much point to it here.
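For what it's worth, that naming convention is mostly associated with bare repositories (ones that have no working directory at all), which you would create with something along these lines instead, and which, incidentally, do not suffer from the push issue described below:
root@git-server:/usr/src# git init --bare myproject.git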

Indiana Git and the Intuitiveness of Doom

At this stage, if you have existing files for your project, you have the choice of transferring them to the server and git add + git commit them there, or first cloning the empty directory on your client and then adding the files there. Since the latter is the more likely situation, this is what I will demonstrate (the following was done on Linux - if you are using msys-git on Windows, see below):
user@git-client:~$ git clone git://git-server/myproject
Cloning into myproject...
warning: You appear to have cloned an empty repository.
user@git-client:~$ cd myproject/
# in real life you would copy, add and commit existing project files there
user@git-client:~/myproject$ touch README
user@git-client:~/myproject$ git add README
user@git-client:~/myproject$ git commit -m "added README"
[master (root-commit) e3cb915] added README
 0 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 README
So far so good, but then:
user@git-client:~/myproject$ git push
No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as 'master'.
Everything up-to-date
OK, googling around shows you need to add origin master to that first push, so:
user@git-client:~/myproject$ git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 208 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To git://git-server/myproject
 ! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to 'git://git-server/myproject'
Now that's even worse! What the heck?

Long story short, git has a design limitation that prevents it from updating both the working directory (the user visible files, such as 'README', .c/.h, etc.) and the index (the hashed objects, which contain the same information plus details of the various changes the files went through) at the same time on the server. Yes, if they really wanted to, the git developers could easily work around that limitation by providing an option to automatically force a sync of the remote working directory from the index whenever index changes have been pushed. But they have decided that, since there exist individual cases where such an option would cause harm, they might as well prevent everybody from doing so altogether, regardless of whether people might be fully aware of what they are doing and accept the consequences.
So this means that, as long as git sees the server git directory as having the 'master' branch checked out (which is the case by default, even in an empty repository), and the client repo as also using the 'master' branch (also the default), it considers that the server repository has precedence (i.e. that there might exist uncommitted changes in the server repo that have higher priority than any committed client ones), and it will therefore reject remote requests to update the index, such as a push.

To work around that, then, you need to make sure that the server doesn't have the 'master' branch checked out when you issue a git push from the client. The easiest way to do that is to check out a different branch, away from 'master', on the server. So if we just check out a 'dummy' branch that we create (-b option), git should be happy again. Let's try that:
root@git-server:/usr/src/myproject# git checkout -b dummy
fatal: You are on a branch yet to be born
Right... And people ask me why I'm still not entirely convinced that git is the best invention since sliced bread. Yes, I'll be the first to say that it is usually miles better than the competition, and when it works, it's just brilliant, but quite frankly, there is such a thing as intuitive, and as far as intuitive goes, there is still a lot of room for improvement in git.
As the error indicates, the problem this time is that our repository is still empty: the 'master' branch that HEAD points to has never had a commit, so there is nothing to branch off from. Now, because we don't have all day to figure out each of git's numerous intricacies, we just take matters into our own hands and repoint HEAD directly with:
root@git-server:/usr/src/myproject# git symbolic-ref HEAD refs/heads/dummy
This should ensure that git no longer sees us as potentially modifying the (still non-existent) 'master' branch on the server, and stops bugging us.

Then, if you go back to the client:
user@git-client:~/myproject~ git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 208 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To git://git-server/myproject
 * [new branch]      master -> master
At last, success!! From there on you should be able to use a naked git push on the client and roll.
If you later want to edit/modify the repo content on the server (the server working directory will remain empty even after a push, because it no longer follows the 'master' HEAD), you can do so by switching back to the 'master' branch with git checkout master. An up to date version of what you have pushed should then appear in your server working directory. However, remember that, if you are planning to push some more from a client, you need to run either git checkout [-b] dummy (if you don't mind having a dummy branch created) or git symbolic-ref HEAD refs/heads/dummy (if you don't want an actual branch) on the server when you are done, to prevent git from rejecting the operation. If you use the latter, be mindful that cloning will then produce a warning: remote HEAD refers to nonexistent ref, unable to checkout message (yes git, if it results in a failure, it is an error, not a warning), so you may have to go back to your server and point symbolic-ref back to 'master' before cloning. And if you use the former, make sure you're working on the 'master' branch, and not 'dummy', after cloning.
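In other words, a typical server side editing session, following the above, would look something like this:
# on the server: switch back to 'master' to get a working copy of the pushed content
git checkout master
# ... edit, git add and git commit as needed ...
# when done, point HEAD away from 'master' again, so that clients can push
git symbolic-ref HEAD refs/heads/dummy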

Yay, now we have a working git server, and more importantly, we know how to work around the various counter-intuitive git limitations to create repositories on it... Of course, that is, as long as you don't use msys-git on Windows 7...

What's wrong with git daemon and msys-git?

OK, so we are sorted out on Linux. How about we try the same thing on Windows 7 with msys-git?
user@WINDOWS ~
$ git clone git://git-server/myproject
Cloning into myproject...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 8 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (8/8), done.

user@WINDOWS ~
$ cd myproject/

user@WINDOWS ~/myproject (master)
$ echo "test" > README

user@WINDOWS ~/myproject (master)
$ git add README
warning: LF will be replaced by CRLF in README.
The file will have its original line endings in your working directory.

user@WINDOWS ~/myproject (master)
$ git commit -m "edited README"
[master warning: LF will be replaced by CRLF in README.
The file will have its original line endings in your working directory.
a0f27c3] edited README
warning: LF will be replaced by CRLF in README.
The file will have its original line endings in your working directory.
 1 files changed, 1 insertions(+), 1 deletions(-)
So far, so good. But then, if you issue git push, the command hangs forever:
user@WINDOWS ~/myproject (master)
$ git push
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3)
On the server, the log doesn't seem to indicate that much is amiss:
Jul  7 16:22:23 git-server git-daemon[7324]: Connection from 1.2.3.4:51238
Jul  7 16:22:23 git-server git-daemon[7324]: Extended attributes (13 bytes) exist <host=git-server>
Jul  7 16:22:23 git-server git-daemon[7324]: Request receive-pack for '/myproject'
Congratulations! You've just encountered msys-git bug #457, for which no patch exists yet. If you want a workaround, you can either switch to using ssh (but then you need to sort out authentication, which, for a single-user, internal-only VCS setup, is a bit of overkill), or share your repository with samba and then add the following to the [remote "origin"] section of .git/config:
pushurl = file:////git-server/src/myproject
As can be inferred, the above simply forces git to use the Samba share rather than the git protocol for push operations. With this workaround in place, you are truly set to use a remote git server.
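For clarity, here is roughly what the [remote "origin"] section of .git/config ends up looking like with the workaround in place (this assumes the server's /usr/src directory is shared as 'src', as in our example):
[remote "origin"]
        fetch = +refs/heads/*:refs/remotes/origin/*
        url = git://git-server/myproject
        pushurl = file:////git-server/src/myproject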

ADDON: The following script can make for a very useful first commit to the server repository, as it lets you toggle between being able to commit files on the server and allowing client pushes:
#!/bin/sh
# Toggle the server repository between 'server edit' mode (a real branch is
# checked out) and 'client push' mode (HEAD points to the unborn 'dummy' branch).
if git branch | grep -q '\*'; then
  # A branch is currently checked out: point HEAD at 'dummy' so clients can push.
  git symbolic-ref HEAD refs/heads/dummy
  echo "Switched to fake branch 'dummy'"
else
  # HEAD already points to 'dummy': go back to 'master' for local server edits.
  git symbolic-ref HEAD refs/heads/master
  echo "Switched to branch 'master'"
fi
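Assuming you commit it as, say, toggle.sh (the name is just a suggestion), switching the server repository between the two modes then simply becomes a matter of running the following from the repository directory on the server:
root@git-server:/usr/src/myproject# sh toggle.sh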

2011-07-03

Windows driver utilities and other good stuff

Courtesy of Xiaofan.
If you test driver installations quite often under Windows 7, then PnPUtil from Microsoft can be quite useful.
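For reference, and if memory serves, the most common PnPUtil invocations look something like the following (to be run from an elevated command prompt; the .inf names are of course just examples):
rem enumerate the third party driver packages currently in the driver store
pnputil -e
rem add a driver package to the store and install it on any matching devices
pnputil -i -a C:\drivers\mydriver.inf
rem delete a driver package from the store (add -f to force removal)
pnputil -d oem42.inf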

The RAPR GUI driver store management utility can also help easily identify the inf associated with a driver.

Note that PnPUtil has a bug under Vista, where output redirection does not work, and it affects RAPR as well. Travis, of libusbK and libusbDotNet fame, has a console version of a driver store management utility located here. The download includes the source as well.
Those who are interested can probably enhance it and create a GUI version for it. Do not use it under XP though.

2011-07-01

com0com signed drivers

If you are using VMware to play with virtual serial consoles on Windows, and especially if you want to have access to that console right from the start, a very nice tool to have is com0com, which is a null-modem emulator that creates a pair of connected virtual COM ports.
VMware does of course offer the possibility to use a named pipe, but you can only connect to it after the image has started running, which may result in bootup messages being truncated. And if you use the VMware option of directing serial to a file, then you can no longer interact.

Com0com solves this problem nicely by creating a pair of virtual COM ports that are always available, and that you can use with VMware on one end, and putty or HyperTerminal on the other.

The only problem is that, as of version 2.2.2.0, the current com0com drivers are unsigned. However, since I had a driver signing certificate begging to be used (before it expired), I have now made a signed version of the drivers available, so that you can easily install com0com without having to change the boot properties of your Vista or Windows 7 installation.

The signed 2.2.2.0 com0com drivers can be obtained here. They are compatible with any Windows 10, Windows 8.x, Windows 7, Windows Vista or Windows XP system.


Important Note: The fact that the signing certificate for the com0com driver above has now expired does not mean that this driver is invalid and/or should not be used. Timestamping was used at the time of signature, therefore the driver package above is exactly as good as one signed with a non-expired certificate. If signed driver packages ceased to be valid once their certificate expired, this would close the door for developers using a code signing certificate that they renew on a yearly basis, as it would require their users to upgrade their drivers or applications every single year, which is of course nonsense. In short, regardless of what the expiration date on the certificate is, this driver package is no less usable or less safe than one signed with a non-expired certificate.
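If you want to double check this for yourself, and assuming you have the signtool utility from the Windows SDK at hand, verifying the signature and its timestamp on the driver binary should be a matter of something like (the exact path and file name may differ in the archive):
rem verify the Authenticode signature, and display the signing and timestamping details
signtool verify /v /pa i386\com0com.sys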

Note that I stripped the GUI installer, so only the command line one is available. Even from the command line, however, the installation is very easy. All you need to do is:
  1. Download and extract the files (using 7-zip if needed)
  2. Open an elevated command prompt in the i386\ (32 bit) or x64\ (64 bit) directory
  3. Run the command:
    setupc
  4. Enter the install command:
    install - -
  5. Accept the prompts.
Once the driver is installed, two connected COM ports will be created, CNCA0 and CNCB0, which you can then use: one with VMware, and the other with putty.
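If you would rather have the pair appear under standard COM port names (some applications only accept COMx style names), then, if I remember the com0com documentation correctly, you can pass PortName parameters to the install command instead of the default "- -", along these lines (COM8/COM9 being arbitrary examples):
install PortName=COM8 PortName=COM9
list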

Now you don't have to miss any bootup messages and you can use an interactive COM port that persists regardless of the power status of the virtual machine. Enjoy!