Controlling How Neutrino Starts

What exactly happens when you start up your system depends on the hardware; this chapter gives a general description.

Note: You need to log in as root in order to change any of the files that the system runs when it starts up.

What happens when you boot?

When you boot your system, the CPU is reset, and it executes whatever is at its reset vector. This is usually a BIOS on x86 boxes, but on other platforms it might be a ROM monitor, or it might be a direct jump into some IPL code for that board. After a ROM monitor runs, it generally jumps to the IPL, and a BIOS might do this as well -- or it might jump directly to the start of the OS image.


Booting a Neutrino system.

The IPL copies the boot image into memory and jumps to the startup code. The startup code initializes the hardware, fills the system page with information about the hardware, loads the callout routines that the kernel uses to interact with the hardware, and then loads and starts the microkernel and process manager, procnto (which, starting with release 6.3.0, also manages named semaphores). The IPL and startup code are generally part of the Board Support Package (BSP) for a particular board.

After procnto has completed its initialization, it runs the commands supplied in the boot script, which might start further customization of the runtime environment either through a shell script or through some program written in C, C++, or a combination of the two.

On a non-x86 disk-booted system, that's pretty well how it happens: most customization is done in the boot script or in a shell script that it calls. For more details, see Making an OS Image in Building Embedded Systems.

For an x86 BIOS boot, this becomes more complex:

BIOS startup

Booting a Neutrino system with an x86 BIOS.

After gaining control, the BIOS configures the hardware, and then it scans for BIOS extension signatures (0x55AA). It calls each BIOS extension (e.g. a network card with a boot ROM, or a hard-disk controller) until one of them boots the system. If none of the BIOS extensions boots the system, the BIOS presents some (usually strange) failure message.

For the network boot case, the boot ROM (usually bootp) downloads an image from a server, copies it into memory, then jumps to the start of the image. The boot image generally needs to run a network stack, and starts some sort of network filesystem to retrieve or access additional programs and files.

You can use the mkifs utility to create the OS image. For a sample buildfile for this sort of image, see the Examples appendix.
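The general shape of such a buildfile, for an x86 BIOS boot, looks something like the sketch below. The program names and options are illustrative only; see the Examples appendix and the mkifs documentation for complete, working buildfiles:

```
# Illustrative sketch of a boot image for an x86 BIOS system
[virtual=x86,bios +compress] boot = {
    startup-bios
    PATH=/proc/boot procnto
}

[+script] startup-script = {
    # Start a console driver and give the user a shell.
    devc-con &
    reopen /dev/con1
    [+session] PATH=/proc/boot sh &
}

libc.so
devc-con
sh
```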

For a disk-based boot of a Neutrino desktop system, the process of booting, and especially system initialization, is more complex. After the BIOS has chosen to boot from the disk, the primary boot loader (sometimes called the partition loader) is called. This loader is "OS-agnostic"; it can load any OS. The one that Neutrino installs displays the message:

Press F1-F4 to select drive or select partition 1,2,3? 1

After a short timeout, it boots whatever OS is in the partition you selected. This loader is /boot/sys/ipl-diskpc1. You can write a loader onto a disk by using dloader.

Loading a Neutrino image

When you choose a QNX partition, the secondary boot loader (sometimes called the OS loader) starts. This loader is Neutrino-specific and resides on the QNX partition. It displays the message:

Hit Esc for .altboot

If you let it time out, the loader loads the operating system image file from /.boot; if you press Escape, the loader gets the image from /.altboot instead. As the loader reads the image, it prints a series of periods. If an error occurs, the loader prints one of the following characters, and the boot process halts:

No OS signature was found.
D or ?
An error occurred reading the disk.

The only difference between the default installed images is that /.boot uses DMA for accessing the EIDE controller, while /.altboot doesn't.

You can find the buildfiles for these images in /boot/build.

You can't rename, unlink, or delete /.boot and /.altboot, although you can change the contents or copy another file to these files. For example, these commands don't work:

mv /.altboot oldaltboot
mv newboot /.altboot

but these do:

cp /.altboot oldaltboot
cp newboot /.altboot

Note: If you modify your boot image, it's a good idea to copy your working image from /.boot to /.altboot, then put your new image in /.boot. That way, if you make a mistake, you can press Escape when you next boot, and you'll have a working image for recovery.


The buildfile for the default .boot image includes these lines:

[+script] startup-script = {
    # To save memory make everyone use the libc in the boot image!
    # For speed (less symbolic lookups) we point to instead
    # of
    procmgr_symlink ../../proc/boot/ /usr/lib/

    # Default user programs to priority 10, other scheduler (pri=10o)
    # Tell "diskboot" this is a hard disk boot (-b1)
    # Tell "diskboot" to use DMA on IDE drives (-D1)
    # Start 4 text consoles by passing "-n4" to "devc-con"
    # and "devc-con-hid" (-o).
    # By adding "-e", the Linux ext2 filesystem will be mounted
    # as well.
    [pri=10o] PATH=/proc/boot diskboot -b1 -D1 \
        -odevc-con,-n4 -odevc-con-hid,-n4
}

This script starts the system by running diskboot, a program that's used on disk-based systems to boot Neutrino. For the entire file, see the Examples appendix.

  • You can pass options to diskboot (to control how the system boots) and even to device drivers. In this buildfile, diskboot passes the -n4 option to devc-con and devc-con-hid to set the number of virtual consoles.
  • You can set up your machine to not use diskboot. For a sample buildfile, see the Examples appendix.
  • The diskboot program gives you the opportunity to update the devb-* drivers on your system. For more information, see "Updating disk drivers," later in this chapter.

When diskboot starts, it prompts:

Press the space bar to input boot options...

Most of these options are for debugging purposes. The diskboot program looks for a Neutrino partition to mount, then runs a series of script files to initialize the system:


Initialization done by diskboot.

The main script for initializing the system is /etc/system/sysinit; you usually keep local system initialization files in the /etc/rc.d directory. For example, if you want to run extra commands at startup on a node, say to mount an NFS drive, you might create a script file named rc.local, make sure it's executable, and put it in the /etc/rc.d directory. For more information, see the description of rc.local later in this chapter.
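As a sketch of such a script, the rc.local below mounts an NFS directory at startup. The server name and mountpoint are made-up examples; fs-nfs2 is the QNX NFS client, and the guard lets the sketch run even on a system where it's absent:

```shell
#!/bin/sh
# Hypothetical /etc/rc.d/rc.local: mount an NFS directory at startup.
# The server name and mountpoint are made-up examples.
if command -v fs-nfs2 >/dev/null 2>&1; then
    fs-nfs2 server:/export /mnt/export &
    status="NFS mount started"
else
    status="fs-nfs2 not available on this system"
fi
echo "rc.local: $status"
```

Remember to make the script executable (chmod +x) before rebooting.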

Here's what diskboot does:

  1. It starts the system logger, slogger. Applications use slogger to log their system messages; you can use sloginfo to view this log.
  2. Next, diskboot runs seedres to read the PnP BIOS and fill procnto's resource database.
  3. Then, diskboot starts pci-bios to support the PCI BIOS.
  4. After that, diskboot starts devb-eide or other disk drivers.

    Note: If you want to pass any options to devb-eide or other drivers, pass them to diskboot in your buildfile.

  5. Next, diskboot looks for filesystems (i.e. partitions and CDs) to mount, which it does by partition type. It recognizes:

    CD-ROMs are mounted as /fs/cdx, and other filesystems as /fs/hdx-type-y, where x is a disk number (e.g. /fs/cd0, /fs/hd1), and y is added for uniqueness as it counts upward. For example, the second DOS partition on hard drive 1 would be /fs/hd1-dos-2.

    By default, one QNX 4 partition is mounted as / instead. This is controlled by looking for a .diskroot file on each QNX 4 partition. If only one such partition has a .diskroot file specifying a mountpoint of /, that partition is unmounted as /fs/hdx-type-y and is then mounted as /; if more than one is found, then diskboot prompts you to select one.

    The .diskroot file is usually empty, but it can contain some commands. For more information, see below.

  6. Optionally, diskboot runs the fat embedded shell, fesh.
  7. Next, diskboot starts the console driver: devc-con-hid (QNX Momentics 6.3.0 Service Pack 3 or later) or devc-con (earlier releases). They're similar, but devc-con-hid supports PS/2, USB, and other human-interface devices.
  8. Finally, diskboot runs the main system-initialization script, /etc/system/sysinit.


The diskboot program uses the .diskroot file to determine which QNX 4 partition to mount as /. The .diskroot file can be one of:

The recognized tokens are:

mount or mountpt
Where to mount this partition. For example:
mount = /home
opt or options
Mount options, either specifically for this mountpoint, or generic. Use commas (not spaces) to separate the options. For example:
options = ro,noexec

For more information, see the documentation for mount and specific drivers in the Utilities Reference, and mount() and mount_parse_generic_args() in the Neutrino Library Reference.

desc or description
A description of the partition. The diskboot program recognizes and parses this token, but it currently ignores the information.
type or fstype
The type of filesystem on the partition. The diskboot program recognizes the strings qnx4, ext2, and dos, but currently ignores this token; it determines the type based on partition numbers, as described for diskboot, above.
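Putting these tokens together, a hypothetical .diskroot file for a partition to be mounted read-only at /home might contain (the values are illustrative):

```
mount = /home
options = ro
desc = home partition
```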


The /etc/system/sysinit file is a script that starts up the main system services. In order to edit this file, you must log in as root.

Note: Before you change the sysinit script, make a backup copy of the latest working version. If you need to create the script, remember to make it executable before you use it (see chmod in the Utilities Reference).

The sysinit script does the following:

  1. It starts slogger, if it isn't yet running.
  2. The script starts the pipe manager, pipe. This manager lets you pass the output from one command as input to another; for more information, see "Redirecting input and output" in Using the Command Line.
  3. Next, sysinit starts mqueue, which manages message queues, using the "traditional" implementation. If you want to use the alternate implementation of message queues that uses asynchronous messaging, you need to start the mq server. For more information, see the Utilities Reference.

    Note: Starting with release 6.3.0, procnto* manages named semaphores, which mqueue used to do (and still does, if it detects that procnto isn't doing so).

  4. If this is the first time you've rebooted after installing the OS, sysinit runs /etc/rc.d/rc.setup-once, which creates various directories and swap files.
  5. Next, sysinit sets the _CS_TIMEZONE configuration string to the value stored in /etc/TIMEZONE. If this file doesn't exist, sysinit sets the time zone to be UTC, or Coordinated Universal Time (formerly Greenwich Mean Time). For more information, see "Setting the time zone" in Configuring Your Environment.
  6. If /etc/rc.d/rc.rtc exists and is executable, sysinit runs it to set up the realtime clock.

    We recommend that you set the hardware clock to UTC time and use the _CS_TIMEZONE configuration string or the TZ environment variable to specify your time zone. The system displays and interprets local times and automatically determines when daylight saving time starts and ends.

    This means that you can have dial-up users in different time zones on the same computer, and they can all see the correct current local time. It also helps when transmitting data from time zone to time zone. You stamp the data with the UTC time stamp, and all of the computers involved should have an easy time comparing time stamps in one time zone to time stamps in another.

    Some operating systems, such as Windows, set the hardware clock to local time. If you install Windows and Neutrino on the same machine, you should set the hardware clock to local time by executing the following command as root and putting it into /etc/rc.d/rc.rtc:

    rtc -l hw

    If you're using Photon, you can just uncheck The hardware clock uses UTC/GMT in phlocale; if you do that, the program creates a rc.rtc file for you that contains the above command.

  7. After setting up the clock, sysinit sets the HOSTNAME environment variable to be the name of the host system. It gets this name from the hostname command, or from /etc/HOSTNAME if that doesn't succeed.

    Note: A hostname can consist only of letters, numbers, and hyphens, and must not start or end with a hyphen. For more information, see RFC 952.

  8. Then, sysinit runs /etc/rc.d/rc.devices to enumerate your system's devices (see "Device enumeration," below). This starts io-net as well as various other drivers, depending on the hardware detected.
  9. If /etc/system/config/useqnet exists and io-net is running, sysinit initializes Neutrino native networking (see the Using Qnet for Transparent Distributed Processing chapter in this guide, and in the Utilities Reference).
  10. Next, sysinit runs the system-initialization script, /etc/rc.d/rc.sysinit (see below).
  11. If that fails, sysinit tries to replace itself with sh or, if that fails, with fesh, so that you at least have a shell when all else fails.
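The steps above can be sketched as a plain shell script. This is a simplified illustration of the flow, not the real sysinit; the QNX-specific servers are only probed for, so the sketch runs anywhere:

```shell
#!/bin/sh
# Simplified sketch of the sysinit flow described above.
probe() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "sysinit sketch: would start $1 here"
    else
        echo "sysinit sketch: $1 not on this system, skipped"
    fi
}

probe slogger     # step 1: system logger
probe pipe        # step 2: pipe manager
probe mqueue      # step 3: traditional message queues

# Step 5: set the time zone from /etc/TIMEZONE, falling back to UTC.
if [ -r /etc/TIMEZONE ]; then
    tz=$(cat /etc/TIMEZONE)
else
    tz=UTC
fi
echo "sysinit sketch: time zone $tz"
```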

Device enumeration

Neutrino uses a device enumerator manager process, enum-devices, to detect all known hardware devices on the system and to start the appropriate drivers and managers. It's called by the /etc/rc.d/rc.devices script, which /etc/system/sysinit invokes.

The enum-devices manager uses a series of configuration files to specify actions to take when the system detects specific hardware devices. After it reads the configuration file(s), enum-devices queries its various enumerators to discover what devices are on the system. It then matches these devices against the device IDs listed in the configuration files. If the device matches, the action clauses associated with the device are executed. You can find the enumerator configuration files in the /etc/system/enum directory.

For example, the /etc/system/enum/devices/net file includes commands to detect network devices, start the appropriate drivers, and then start netmanager to configure the TCP/IP parameters, using the settings in /etc/net.cfg.

Here's some sample code from a configuration file:

device(pci, ven=2222, dev=1111)
    uniq(sernum, devc-ser, 1)
    driver(devc-ser8250,  "-u$(sernum) $(ioport1),$(irq)" )

This code directs the enumerator to do the following when it detects device 1111 from vendor 2222:

  1. Set sernum to the next unique serial device number, starting at 1.
  2. Start the devc-ser8250 driver with the provided options (the device enumerator sets the ioport and irq variables).

To detect new hardware or specify any additional options, you can extend the enumerator configuration files in the ways described below.

The enumerator reads and concatenates the contents of all configuration files under the chosen directory before it starts processing.

For details on the different command-line options and a description of the syntax for the configuration files, see enum-devices in the Utilities Reference.

oem file or directory

If you're an OEM, and you've written any device drivers, create an oem file or directory under /etc/system/enum to contain the definitions for the devices.

overrides file or directory

If you need to set up devices or options that are specific to your particular system configuration, create an overrides file or directory under /etc/system/enum. The enumerator includes the overrides file or directory last and adds any definitions in it to the set that enum-devices works with. If the overrides file has something that a previously included file also has, the later definition wins.
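For example, if a standard configuration file starts a serial driver with default options, an overrides entry that restates the same device wins because it's included last. This sketch reuses the device IDs from the earlier sample; the -b (baud rate) option is illustrative:

```
device(pci, ven=2222, dev=1111)
    uniq(sernum, devc-ser, 1)
    driver(devc-ser8250, "-b57600 -u$(sernum) $(ioport1),$(irq)")
```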


Host-specific enumerators

To further customize the enumerators for your system configuration, you can create a /etc/host_cfg/$HOSTNAME/system/enum directory. If this directory structure exists, the rc.devices script tells the enumerators to read configuration files from it instead of from /etc/system/enum.

Note: Even if you have a /etc/host_cfg/$HOSTNAME/system/enum directory, the enumerator looks for an oem directory and overrides file under /etc/system/enum.

An easy way to set up the directory is to copy the /etc/system/enum directory (including all its subdirectories) to your /etc/host_cfg/$HOSTNAME/system directory and then start customizing.


The /etc/system/sysinit script runs /etc/rc.d/rc.sysinit to do local initialization of your system.


Initialization done by /etc/rc.d/rc.sysinit.

The rc.sysinit script does the following:

  1. It starts a secure random-number generator, random, to provide random numbers for use in encryption and so on.
  2. If the /var/dumps directory exists, rc.sysinit starts the dumper utility to capture (in /var/dumps) dumps of processes that terminate abnormally.
  3. If /etc/host_cfg/$HOSTNAME/rc.d/rc.local exists and is executable, rc.sysinit runs it. Otherwise, if /etc/rc.d/rc.local exists and is executable, rc.sysinit runs it. There isn't a default version of this file; you must create it if you want to use it. For more information, see "rc.local," below.
  4. Finally, rc.sysinit runs tinit. By default, the system starts Photon, but if you create a file called /etc/system/config/nophoton, then rc.sysinit tells tinit to use text mode. For more information, see "tinit," below.


As described above, rc.sysinit runs /etc/host_cfg/$HOSTNAME/rc.d/rc.local or /etc/rc.d/rc.local, if the file exists and is executable.

You can use the rc.local file to customize your startup by:

You can also use rc.local to slay running processes and restart them with different options, but this is a heavy-handed approach. Instead of doing this, modify the device enumeration to start the processes with the correct options. For more information, see "Device enumeration," earlier in this chapter.

For example, you can:

Don't use the rc.local file to set up environment variables: another shell starts after this script runs, so any environment variable that you set in this file disappears by the time you get a chance to log in.

Note: After you've created rc.local, make sure that you set the executable bit on the file with the command:
chmod +x rc.local


The tinit program initializes the terminal, as follows:

  1. If the -p option is specified, tinit starts Photon.
  2. Otherwise, tinit looks at /etc/config/ttys and runs login or shells, based on the contents of the file.

For more information, including a description of /etc/config/ttys, see tinit in the Utilities Reference.

Updating disk drivers

The Neutrino boot process can dynamically add block I/O (i.e. disk) drivers, letting you boot on systems with newer controllers. The mechanism is simple and not proprietary to QNX Software Systems, so third parties can offer enhanced block drivers without any intervention on our part.

The driver update consists of the drivers themselves (devb-* only) and a simple configuration file. The configuration file is in plain text (DOS or UNIX line endings accepted), with the following format:


The first three fields are mandatory. The fields are as follows:

The file name of the driver.
The string for the boot process to display when trying the driver.
The total time to wait for devices.
Any additional arguments to the driver (e.g. blk cache=512k).

The configuration file must be called drivers.cfg, and you must supply the update on a physical medium, currently a CD-ROM or a USB flash drive. The boot process looks in the root of the filesystem first, and then in a directory called qnxdrvr. This can help reduce clutter in the root of the filesystem.
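As an illustration only (the exact field syntax isn't reproduced in this excerpt), a one-line entry for a hypothetical devb-abc100 driver, following the field order given above, might look like:

```
devb-abc100,Scanning for ABC-100 controllers...,15,blk cache=512k
```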

The source filesystem can be any of the supported filesystems. These filesystems are known to work:

If the update is distributed over the web in zip or tar format with the qnxdrvr structure preserved, an end user simply has to download the archive, unzip it onto a USB drive, and insert the drive when booting.

You can apply a driver update by pressing Space during booting and then pressing F2. The system then finishes starting the standard block drivers, so that there's a source filesystem from which to apply the update. You're then prompted to choose the filesystem and insert the update media.

Note: If you need to rescan the partitions (for example, to find a USB drive that you inserted after booting), press F12.

Once the files have been copied, you're prompted to reinsert the QNX Momentics installation CD if applicable. The block drivers are then restarted.

This mechanism also lets you update existing drivers or simply modify their arguments (e.g. PCI ID specification).

If you're installing, the installation program copies the updated drivers to /sbin and the configuration file to /boot/sys. It then makes copies of the standard build files in /boot/build (except the multicore ones) and uses them to create new image files, qnxbase-drvrup.ifs and qnxbasedma-drvrup.ifs, in /boot/fs. The DMA version of this new file is copied to /.boot, and the non-DMA version is copied to /.altboot.

Note: The installation program doesn't rebuild multicore (SMP) images.

Applying a driver update patch after you've installed QNX Neutrino

If you're updating or adding drivers to an already existing QNX Neutrino system using this mechanism, you must manually copy the drivers to the correct directory, and you must modify the boot image to use the new driver:

To modify the boot image:
  1. Boot the machine and apply the driver updates.
  2. Once the machine has booted, copy the following from the driver update disk used in step 1:
    1. Copy the new devb-* drivers to /sbin.
    2. Copy drivers.cfg to somewhere under /. If you put it in a directory that's in the mkifs search path (e.g. /sbin, /boot/sys), mkifs will find it automatically.
  3. Copy the build file (typically to
  4. Edit the build file and do the following:
  5. As a safety precaution (so you'll be sure to have at least one image that boots):
    cp /.boot /.altboot
  6. mkifs /.boot
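Steps 5 and 6 can be rehearsed in a scratch directory before you touch the real files. On a real system you'd operate on /.boot and /.altboot directly and give mkifs your actual build file; the file names and contents below are stand-ins:

```shell
#!/bin/sh
# Rehearsal of steps 5 and 6 in a scratch directory.
set -e
root=$(mktemp -d)
printf 'known-good image' > "$root/.boot"   # stands in for the current image

cp "$root/.boot" "$root/.altboot"           # step 5: keep a bootable fallback
# Step 6 on a real system: mkifs your-edited.build /.boot
printf 'new image' > "$root/.boot"          # stands in for the mkifs output

echo "fallback preserved: $(cat "$root/.altboot")"
```

If the new image turns out to be broken, you can press Esc at the next boot to fall back to /.altboot, as described earlier in this chapter.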


Here are some problems you might encounter while customizing how your system starts up:

The applications I put in rc.local don't run.
Check the following:
I messed up my rc.local file, and now I can't boot.
You can: