Compiling and Debugging

Choosing the version of the OS

The QNX Momentics development suite lets you install and work with multiple versions of Neutrino. Whether you're using the command line or the IDE, you can choose which version of the OS to build programs for.

Note: Coexistence of 6.3.0 and 6.2.1 is supported only on Windows and Solaris hosts.

When you install QNX Momentics, you get a set of configuration files that indicate where you've installed the software. The QNX_CONFIGURATION environment variable stores the location of the configuration files for the installed versions of Neutrino; on a self-hosted Neutrino machine, the default is /etc/qconfig.

If you're using the command-line tools, use the qconfig utility to configure your machine to use a specific version of Neutrino.

Note: On Windows hosts, use QWinCfg, a graphical front end for qconfig. You can launch it from the Start menu.

Here's what qconfig does:

If you run it without any options, qconfig lists the versions of Neutrino that are installed on your machine.
If you specify options, qconfig sets up the environment for building software for a specific version of the OS.

When you start the IDE, it uses your current qconfig choice as the default version of the OS; if you haven't chosen a version, the IDE chooses an entry from the directory identified by QNX_CONFIGURATION. If you want to override the IDE's choice, you can choose the appropriate build target. For details, see "Version coexistence" in the Concepts chapter of the IDE User's Guide.

Neutrino uses these environment variables to locate files on the host machine:

QNX_HOST
The location of host-specific files.
QNX_TARGET
The location of target backends on the host machine.

The qconfig utility sets these variables according to the version of QNX Momentics that you specified.

Making your code more portable

To help you create portable applications, QNX Neutrino lets you compile for specific standards and include QNX- or Neutrino-specific code.

Conforming to standards

The header files supplied with the C library provide the proper declarations for the functions and for the number and types of arguments used with them. Constant values used in conjunction with the functions are also declared. The files can usually be included in any order, although individual function descriptions show the preferred order for specific headers.

When you use the -ansi option, qcc compiles strict ANSI code. Use this option when you're creating an application that must conform to the ANSI standard. The effect on the inclusion of ANSI- and POSIX-defined header files is that certain portions of the header files are omitted:

You can then use the qcc -D option to define feature-test macros to select those portions that are omitted. Here are the most commonly used feature-test macros:

_POSIX_C_SOURCE=199506
Include those portions of the header files that relate to the POSIX standard (IEEE Standard Portable Operating System Interface for Computer Environments - POSIX 1003.1, 1996).
_FILE_OFFSET_BITS=64
Make the libraries use 64-bit file offsets.
_LARGEFILE64_SOURCE
Include declarations for the functions that support large files (those whose names end with 64).
_QNX_SOURCE
Include everything defined in the header files. This is the default.

Feature-test macros may be defined on the command line, or in the source file before any header files are included. The latter is illustrated in the following example, in which an ANSI- and POSIX-conforming application is being developed.

#define _POSIX_C_SOURCE 199506
#include <limits.h>
#include <stdio.h>

#if defined(_QNX_SOURCE)
  #include "non_POSIX_header1.h"
  #include "non_POSIX_header2.h"
  #include "non_POSIX_header3.h"
#endif

You'd then compile the source code using the -ansi option.

The following ANSI header files are affected by the _POSIX_C_SOURCE feature-test macro:

The following ANSI and POSIX header files are affected by the _QNX_SOURCE feature-test macro:

Header file Type
<ctype.h> ANSI
<fcntl.h> POSIX
<float.h> ANSI
<limits.h> ANSI
<math.h> ANSI
<process.h> extension to POSIX
<setjmp.h> ANSI
<signal.h> ANSI
<sys/stat.h> POSIX
<stdio.h> ANSI
<stdlib.h> ANSI
<string.h> ANSI
<termios.h> POSIX
<time.h> ANSI
<sys/types.h> POSIX
<unistd.h> POSIX

Including QNX- or Neutrino-specific code

If you need to include QNX- or Neutrino-specific code in your application, you can wrap it in an #ifdef to make the program more portable. The qcc utility defines these preprocessor symbols (or manifest constants):

__QNX__
The target is a QNX operating system (QNX 4 or QNX Neutrino).
__QNXNTO__
The target is the QNX Neutrino operating system.

For example:

#if defined(__QNX__)
   /* QNX-specific (any flavor) code here */

   #if defined(__QNXNTO__)
      /* QNX Neutrino-specific code here */
   #else
      /* QNX 4-specific code here */
   #endif
#endif

For information about other preprocessor symbols that you might find useful, see the Manifests chapter of the Neutrino Library Reference.

Header files in /usr/include

The ${QNX_TARGET}/usr/include directory includes at least the following subdirectories (in addition to the usual sys):

arpa
Header files concerning the Internet, FTP, and TELNET.
hw
Descriptions of various hardware devices.
arm, mips, ppc, sh, x86
CPU-specific header files. You typically don't need to include them directly -- they're included automatically.
malloc, malloc_g
Memory allocation; for more information, see the Heap Analysis: Making Memory Errors a Thing of the Past chapter in this guide.
net
Network interface descriptions.
netinet, netinet6, netkey
Header files concerning TCP/IP.
photon
Header files concerning the Photon microGUI; for more information, see the Photon documentation.
snmp
Descriptions for the Simple Network Management Protocol (SNMP).

Self-hosted or cross-development

In the rest of this chapter, we'll describe how to compile and debug a Neutrino system. Your Neutrino system might be anything from a deeply embedded turnkey system to a powerful multiprocessor server. You'll develop the code to implement your system using development tools running on the Neutrino platform itself or on any other supported cross-development platform.

Neutrino supports both of these development types:

self-hosted, where you develop and debug on the same Neutrino system
cross-development, where you develop on a host system and run and debug the program on a separate target

This section describes the procedures for compiling and debugging for both types.

A simple example

We'll now go through the steps necessary to build a simple Neutrino system that runs on a standard PC and prints out the text "Hello, world!" -- the classic first C program.

Let's look at the spectrum of methods available to you to run your executable:

If your environment is: Then you can:
Self-hosted Compile and link, then run on host
Cross-development, network filesystem link Compile and link, load over network filesystem, then run on target
Cross-development, debugger link Compile and link, use debugger as a "network filesystem" to transfer executable over to target, then run on target
Cross-development, rebuilding the image Compile and link, rebuild entire image, reboot target.

Which method you use depends on what's available to you. All the methods share the same initial step -- write the code, then compile and link it for Neutrino, targeting the platform on which you wish to run the program.

Note: You can choose how you wish to compile and link your programs: you can use tools with a command-line interface (via the qcc command) or you can use an IDE (Integrated Development Environment) with a graphical user interface (GUI) environment. Our samples here illustrate the command-line method.

The "Hello, world!" program itself is very simple:

#include <stdio.h>

int main (void)
{
    printf ("Hello, world!\n");
    return (0);
}

You compile it for PowerPC (big-endian) with the single line:

qcc -V gcc_ntoppcbe hello.c -o hello

This executes the C compiler with a special cross-compilation flag, -V gcc_ntoppcbe, which tells qcc to use the GCC compiler with Neutrino-specific includes, libraries, and options to create a PowerPC (big-endian) executable.

To see a list of compilers and platforms supported, simply execute the command:

qcc -V

If you're using an IDE, refer to the documentation that came with the IDE software for more information.

At this point, you should have an executable called hello.


If you're using a self-hosted development system, you're done. You don't even have to use the -V cross-compilation flag (as was shown above), because the qcc driver will default to the current platform. You can now run hello from the command line:


Cross-development with network filesystem

If you're using a network filesystem, let's assume you've already set up the filesystem on both ends. For information on setting this up, see the Sample Buildfiles appendix in Building Embedded Systems.

Using a network filesystem is the richest cross-development method possible, because you have access to remotely mounted filesystems. This is ideal for a number of reasons:

For a network filesystem, you'll need to ensure that the shell's PATH environment variable includes the path to your executable via the network-mounted filesystem. At this point, you can just type the name of the executable at the target's command-line prompt (if you're running a shell on the target):


Cross-development with debugger

Once the debug agent is running, and you've established connectivity between the host and the target, you can use the debugger to download the executable to the target, and then run and interact with it.

Download/upload facility

When the debug agent is connected to the host debugger, you can transfer files between the host and target systems. Note that this is a general-purpose file transfer facility -- it's not limited to transferring only executables to the target (although that's what we'll be describing here).

In order for Neutrino to execute a program on the target, the program must be available for loading from some type of filesystem. This means that when you transfer executables to the target, you must write them to a filesystem. Even if you don't have a conventional filesystem on your target, recall that there's a writable "filesystem" present under Neutrino -- the /dev/shmem filesystem. This serves as a convenient RAM-disk for downloading the executables to.

Cross-development, deeply embedded

If your system is deeply embedded and you have no connectivity to the host system, or you wish to build a system "from scratch," you'll have to perform the following steps (in addition to the common step of creating the executable(s), as described above):

  1. Build a Neutrino system image.
  2. Transfer the system image to the target.
  3. Boot the target.

Step 1: Build a Neutrino system image.

You use a buildfile to build a Neutrino system image that includes your program. The buildfile contains a list of files (or modules) to be included in the image, as well as information about the image. A buildfile lets you execute commands, specify command arguments, set environment variables, and so on. The buildfile will look like this:

[virtual=ppcbe,elf] .bootstrap = {
    startup-800fads
    PATH=/proc/boot procnto-800
}
[+script] .script = {
    devc-serppc800 -e -c20000000 -b9600 smc1 &
    reopen
    hello &
}

[type=link] /dev/console=/dev/ser1
[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so

[perms=+r,+x]

libc.so

[data=copy]

devc-serppc800
hello

The first part (the four lines starting with [virtual=ppcbe,elf]) contains information about the kind of image we're building.

The next part (the five lines starting with [+script]) is the startup script that indicates what executables (and their command-line parameters, if any) should be invoked.

The [type=link] lines set up symbolic links to specify the serial port and shared library file we want to use.

Note: The runtime linker is expected to be found in a file called ldqnx.so.2, but the runtime linker is currently contained within the libc.so file, so we make a process manager symbolic link to it.

The [perms=+r,+x] lines assign permissions to the binaries that follow -- in this case, we're setting them to be Readable and Executable.

Then we include the C shared library, libc.so.

Then the line [data=copy] specifies to the loader that the data segment should be copied. This applies to all programs that follow the [data=copy] attribute. The result is that we can run the executable multiple times.

Finally, the last part (the last two lines) is simply the list of files indicating which files should be included as part of the image. For more details on buildfile syntax, see the mkifs entry in the Utilities Reference.

Our sample buildfile indicates the following:

Let's assume that the above buildfile is called hello.bld. Using the mkifs utility, you could then build an image by typing:

mkifs hello.bld hello.ifs

Step 2: Transfer the system image to the target.

You now have to transfer the image hello.ifs to the target system. If your target is a PC, the most universal method of booting is to make a bootable floppy diskette.


If you're developing on a platform that has TCP/IP networking and connectivity to your target, you may be able to boot your Neutrino target system using a BOOTP server. For details, see the "BOOTP section" in the Customizing IPL Programs chapter in Building Embedded Systems.

If your development system is Neutrino, transfer your image to a floppy by issuing this command:

dinit -f hello.ifs /dev/fd0

If your development system is Windows NT or Windows 95/98, transfer your image to a floppy by issuing this command:

dinit -f hello.ifs a:

Step 3: Boot the target.

Place the floppy diskette into your target system and reboot your machine. The message "Hello, world!" should appear on your screen.

Using libraries

When you're developing code, you almost always make use of a library -- a collection of code modules that you or someone else has already developed (and hopefully debugged). Under Neutrino, we have three different ways of using libraries:

Static linking

You can combine your modules with the modules from the library to form a single executable that's entirely self-contained. We call this static linking. The word "static" implies that it's not going to change -- all the required modules are already combined into one executable.

Dynamic linking

Rather than build a self-contained executable ahead of time, you can take your modules and link them in such a way that the Process Manager will link them to the library modules before your program runs. We call this dynamic linking. The word "dynamic" here means that the association between your program and the library modules that it uses is done at load time, not at linktime (as was the case with the static version).

Runtime loading

There's a variation on the theme of dynamic linking called runtime loading. In this case, the program decides while it's actually running that it wishes to load a particular function from a library.

Static and dynamic libraries

To support the two major kinds of linking described above, Neutrino has two kinds of libraries: static and dynamic.

Static libraries

A static library is usually identified by a .a (for "archive") suffix (e.g. libc.a). The library contains the modules you want to include in your program and is formatted as a collection of ELF object modules that the linker can then extract (as required by your program) and bind with your program at linktime.

This "binding" operation literally copies the object module from the library and incorporates it into your "finished" executable. The major advantage of this approach is that when the executable is created, it's entirely self-sufficient -- it doesn't require any other object modules to be present on the target system. This advantage is usually outweighed by two principal disadvantages, however:

Every executable that uses the library gets its own copy of the modules, so the executables are bigger and consume more storage and memory on the target.
If a library module changes (e.g. to fix a bug), every executable that uses that module must be relinked.

Dynamic libraries

A dynamic library is usually identified by a .so (for "shared object") suffix (e.g. libc.so). Like a static library, this kind of library also contains the modules that you want to include in your program, but these modules are not bound to your program at linktime. Instead, your program is linked in such a way that the Process Manager causes your program to be bound to the shared objects at load time.

The Process Manager performs this binding by looking at the program to see if it references any shared objects (.so files). If it does, then the Process Manager looks to see if those particular shared objects are already present in memory. If they're not, it loads them into memory. Then the Process Manager patches your program to be able to use the shared objects. Finally, the Process Manager starts your program.

Note that from your program's perspective, it isn't even aware that it's running with a shared object versus being statically linked -- that happened before the first line of your program ran!

The main advantage of dynamic linking is that the programs in the system will reference only a particular set of objects -- they don't contain them. As a result, programs are smaller. This also means that you can upgrade the shared objects without relinking the programs. This is especially handy when you don't have access to the source code for some of the programs.


When a program decides at runtime that it wants to "augment" itself with additional code, it will issue the dlopen() function call. This function call tells the system that it should find the shared object referenced by the dlopen() function and create a binding between the program and the shared object. Again, if the shared object isn't present in memory already, the system will load it. The main advantage of this approach is that the program can determine, at runtime, which objects it needs to have access to.

Note that there's no real difference between a library of shared objects that you link against and a library of shared objects that you load at runtime. Both modules are of the exact same format. The only difference is in how they get used.

By convention, therefore, we place libraries that you link against (whether statically or dynamically) into the lib directory, and shared objects that you load at runtime into the lib/dll (for "dynamically loaded libraries") directory.

Note that this is just a convention -- there's nothing stopping you from linking against a shared object in the lib/dll directory or from using the dlopen() function call on a shared object in the lib directory.

Platform-specific library locations

The development tools have been designed to work out of their processor directories (x86, ppcbe, etc.). This means you can use the same toolset for any target platform.

If you have development libraries for a certain platform, then put them into the platform-specific library directory (e.g. /x86/lib), which is where the compiler tools will look.

Note: You can use the -L option to qcc to explicitly provide a library path.

Linking your modules

To link your application against a library, use the -l option to qcc, omitting the lib prefix and any extension from the library's name. For example, to link against libsocket, specify -l socket.

You can specify more than one -l option. The qcc configuration files might specify some libraries for you; for example, qcc usually links against libc. The description of each function in the Neutrino Library Reference tells you which library to link against.

By default, the tool chain links dynamically. We do this because of all the benefits mentioned above.

If you want to link statically, then you should specify the -static option to qcc, which will cause the link stage to look in the library directory only for static libraries (identified by a .a extension).

Note: For this release of Neutrino, you can't use the floating point emulator (fpemu.so) in statically linked executables.

Although we generally discourage linking statically, it does have this advantage: in an environment with tight configuration management and software QA, the very same executable can be regenerated at linktime and known to be complete at runtime.

To link dynamically (the default), you don't have to do anything.

To link statically and dynamically (some libraries linked one way, other libraries linked the other way), the two keywords -Bstatic and -Bdynamic are positional parameters that can be specified to qcc. All libraries specified after the particular -B option will be linked in the specified manner. You can have multiple -B options:

qcc ... -Bdynamic -l1 -l2 -Bstatic -l3 -l4 -Bdynamic -l5

This will cause libraries lib1, lib2, and lib5 to be dynamically linked (i.e. will link against the files lib1.so, lib2.so, and lib5.so), and libraries lib3 and lib4 to be statically linked (i.e. will link against the files lib3.a and lib4.a).

You may see the extension .1 appended to the name of the shared object (e.g. libc.so.1). This is a version number. Use the extension .1 for your first revision, and increment the revision number if required.

You may wish to use the above "mixed-mode" linking because some of the libraries you're using will be needed by only one executable or because the libraries are small (less than 4 KB), in which case you'd be wasting memory to use them as shared libraries. Note that shared libraries are typically mapped in 4-KB pages and will require at least one page for the "text" section and possibly one page for the "data" section.

Note: When you specify -Bstatic or -Bdynamic, all subsequent libraries will be linked in the specified manner.

Creating shared objects

To create a shared object suitable for linking against:

  1. Compile the source files for the library using the -shared option to qcc.
  2. To create the library from the individual object modules, simply combine them with the linker (this is done via the qcc compiler driver as well, also using the -shared command-line option).

Note: Make sure that all objects and "static" libs that are pulled into a .so are position-independent as well (i.e. also compiled with -shared).

If you make a shared library that has to static-link against an existing library, you can't static-link against the .a version (because those libraries themselves aren't compiled in a position-independent manner). Instead, there's a special version of the libraries that has a capital "S" just before the .a extension. For example, instead of linking against libsocket.a, you'd link against libsocketS.a. We recommend that you don't static-link, but rather link against the .so shared object version.

Specifying an internal name

When you're building a shared object, you can specify the following option to qcc:

"-Wl,-hname"

(You might need the quotes to pass the option through to the linker intact, depending on the shell.)

This option sets the internal name of the shared object to name instead of to the object's pathname, so you'd use name to access the object when dynamically linking. You might find this useful when doing cross-development (e.g. from a Windows NT system to a Neutrino target).


Now let's look at the different options you have for debugging the executable. Just as you have two basic ways of developing (self-hosted and cross-development), you have similar options for debugging.

Debugging in a self-hosted environment

The debugger can run on the same platform as the executable being debugged:

Self-hosted debugging

Debugging in a self-hosted environment.

In this case, the debugger starts the debug agent, and then establishes its own communications channel to the debug agent.

Debugging in a cross-development environment

The debugger can run on one platform to debug executables on another:

Cross-development debugging

Debugging in a cross-development environment.

In a cross-development environment, the host and the target systems must be connected via some form of communications channel.

The two components, the debugger and the debug agent, perform different functions. The debugger is responsible for presenting a user interface and for communicating over some communications channel to the debug agent. The debug agent is responsible for controlling (via the /proc filesystem) the process being debugged.

All debug information and source remains on the host system. This combination of a small target agent and a full-featured host debugger allows for full symbolic debugging, even in the memory-constrained environments of small targets.


In order to debug your programs with full source using the symbolic debugger, you'll need to tell the C compiler and linker to include symbolic information in the object and executable files. For details, see the qcc docs in the Utilities Reference. Without this symbolic information, the debugger can provide only assembly-language-level debugging.

The GNU debugger (gdb)

The GNU debugger is a command-line program that provides a very rich set of options. You'll find a tutorial-style doc called "Using GDB" as an appendix in this manual.

Starting gdb

You can invoke gdb by using the following variants, which correspond to your target platform:

For this target: Use this command:
ARM ntoarm-gdb
Intel ntox86-gdb
MIPS ntomips-gdb
PowerPC ntoppc-gdb
SH4 ntosh-gdb

For more information, see the gdb entry in the Utilities Reference.

The process-level debug agent

When a breakpoint is encountered and the process-level debug agent (pdebug) is in control, the process being debugged and all its threads are stopped. All other processes continue to run and interrupts remain enabled.

Note: To use the pdebug agent, you must set up pty support (via devc-pty) on your target.

When the process's threads are stopped and the debugger is in control, you may examine the state of any thread within the process. You may also "freeze" all or a subset of the stopped threads when you continue. For more info on examining thread states, see your debugger docs.

The pdebug agent may either be included in the image and started in the image startup script or started later from any available filesystem that contains pdebug.

The pdebug command-line invocation specifies which device will be used. (Note that for self-hosted debugging, pdebug is started automatically by the host debugger.)

You can start pdebug in one of three ways, reflecting the nature of the connection between the debugger and the debug agent:

Serial connection

If the host and target systems are connected via a serial port, then the debug agent (pdebug) should be started with the following command:

pdebug devicename[,baud]

This indicates the target's communications channel (devicename) and specifies the baud rate (baud).

For example, if the target has a /dev/ser2 connection to the host, and we want the link to be 115,200 baud, we would specify:

pdebug /dev/ser2,115200

Serial debugging

Running the process debug agent with a serial link at 115200 baud.

The Neutrino target requires a supported serial port. The target is connected to the host using either a null-modem cable, which allows two identical serial ports to be directly connected, or a straight-through cable, depending on the particular serial port provided on the target.

The null-modem cable crosses the Tx/Rx data and handshaking lines. In our PowerPC FADS example, you'd use a straight-through cable. Most computer stores stock both types of cables.

Null-modem cable pinout

Null-modem cable pinout.

TCP/IP connection

If the host and the target are connected via some form of TCP/IP connection, the debugger and agent can use that connection as well. Two types of TCP/IP communications are possible with the debugger and agent: static port and dynamic port connections (see below).

The Neutrino target must have a supported Ethernet controller. Note that the debug agent requires the TCP/IP manager to be running on the target, so this method needs more memory than a serial link.

This need for extra memory is offset by the advantage of being able to run multiple debuggers with multiple debug sessions over the single network cable. In a networked development environment, developers on different network hosts could independently debug programs on a single common target.

Multiple hosts debugging a single target

Several developers can debug a single target system.

TCP/IP static port connection

For a static port connection, the debug agent is assigned a TCP/IP port number and will listen for communications on that port only. For example, the pdebug 1204 command specifies TCP/IP port 1204:

TCP/IP static port debugging

Running the process debug agent with a TCP/IP static port.

If you have multiple developers, each developer could be assigned a specific TCP/IP port number above the reserved range (ports 0 through 1023).

TCP/IP dynamic port connection

For a dynamic port connection, the debug agent is started by inetd and communicates via standard input/output. The inetd process fetches the communications port from the configuration file (typically /etc/services). The host debugger connects to that port via inetd -- the debug agent itself has no knowledge of the port.

The command to run the process debug agent in this case is simply as follows (from the inetd.conf file):

pdebug -

TCP/IP dynamic port debugging

For a TCP/IP dynamic port connection, the inetd process will manage the port.

Note that this method is also suitable for one or more developers.

Sample boot script for dynamic port sessions

The following boot script supports multiple sessions specifying the same port. Although the port for each session on the pdebug side is the same, inetd causes unique ports to be used on the debugger side. This ensures a unique socket pair for each session.

Note that inetd should be included and started in your boot image. The pdebug program should also be in your boot image (or available from a mounted filesystem).

The config files could be built into your boot image (as in this sample script) or linked in from a remote filesystem using the [type=link] command:

[type=link] /etc/services=/mount_point/services
[type=link] /etc/inetd.conf=/mount_point/inetd.conf

Here's the boot script:

[virtual=x86,bios +compress] boot = {
    startup-bios -N node428
    PATH=/proc/boot:/bin:/apk/bin_nto:./ procnto
}

[+script] startup-script = {
# explicitly running in edited mode for the console link
    devc-ser8250 -e -b115200 &
    display_msg Welcome to Neutrino on a PC-compatible BIOS system 
# tcp/ip with a NE2000 Ethernet adaptor
    io-net -dne2000 -pttcpip if=ndi0: &
    waitfor /dev/socket
    inetd &
    pipe &
# pdebug needs devc-pty and esh    
    devc-pty &
# NFS mount of the Neutrino filesystem
    fs-nfs2 -r 10.89:/x86 /x86 -r 10.89:/home /home & 
# CIFS mount of the NT filesystem
    fs-cifs -b //QA: /QAc apkleywegt 123 & 
# NT Hyperterm needs this to interpret backspaces correctly
    stty erase=08
    reopen /dev/console
    [+session] esh &
}

[type=link] /usr/lib/ldqnx.so.2=/proc/boot/libc.so
[type=link] /lib=/x86/lib
[type=link] /tmp=/dev/shmem         # tmp points to shared memory
[type=link] /dev/console=/dev/ser2  # no local terminal
[type=link] /bin=/x86/bin           # executables in the path 
[type=link] /apk=/home/apkleywegt   # home dir

[perms=+r,+x]          # Boot images made under MS-Windows
                       # need to be reminded of permissions.

[data=copy]            # All executables that can be restarted
                       # go below.
                       # Data files are created in the named 
                       # directory.
/etc/hosts = {
127.0.0.1    localhost
10.89        node89
10.222       node222
10.326       node326
10.437       node437   QA
10.241       APP_ENG_1
}

/etc/services = {
ftp           21/tcp
telnet        23/tcp
finger        79/tcp
pdebug        8000/tcp
}

/etc/inetd.conf = {
ftp     stream    tcp    nowait    root    /bin/ftpd       ftpd
telnet  stream    tcp    nowait    root    /bin/telnetd    telnetd
finger  stream    tcp    nowait    root    /bin/fingerd    fingerd
pdebug  stream    tcp    nowait    root    /bin/pdebug     pdebug -
}

A simple debug session

In this example, we'll be debugging our "Hello, world!" program via a TCP/IP link. We go through the following steps:

Configure the target

Let's assume an x86 target using a basic TCP/IP configuration. The following lines (from the sample boot file at the end of this chapter) show what's needed to host the sample session:

io-net -dne2000 -pttcpip if=ndi0: &
devc-pty &
[+session] pdebug 8000 &

The above gives the target the node name node428 (10.428 for short) and configures the pdebug program to use port 8000.

Compile for debugging

We'll be using the x86 compiler. Note the -g option, which enables debugging information to be included:

$ qcc -V gcc_ntox86 -g -o hello hello.c  
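
If you're not sure which compiler variants are installed on your host, qcc can list them. This is a general check, not part of the sample session:

    # List the installed compiler/target combinations
    # (gcc_ntox86 should appear among them):
    $ qcc -V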

Start the debug session

For this simple example, the sources can be found in our working directory. The gdb debugger provides its own shell; by default its prompt is (gdb). The following commands would be used to start the session. To reduce document clutter, we'll run the debugger in quiet mode:

# Working from the source directory:
    (61) con1 /home/allan/src >ntox86-gdb -quiet

# Specifying the target IP address and the port 
# used by pdebug:
    (gdb) target qnx 10.428:8000
    Remote debugging using 10.428:8000
    0x0 in ?? ()

# Uploading the debug executable to the target:
# (This can be a slow operation. If the executable
# is large, you may prefer to build the executable
# into your target image.)
# Note that the file has to be in the target system's namespace,
# so we can get the executable via a network filesystem, ftp,
# or, if no filesystem is present, via the upload command.

    (gdb) upload hello /tmp/hello

# Loading the symbolic debug information from the
# current working directory:
# (In this case, "hello" must reside on the host system.)

    (gdb) sym hello
    Reading symbols from hello...done.

# Starting the program:
    (gdb) run /tmp/hello
    Starting program:  /tmp/hello
    Trying to find symbol file for
    Retrying dynamic interpreter in
# Setting the breakpoint on main():
    (gdb) break main
    Breakpoint 1 at 0x80483ae: file hello.c, line 8.
# Allowing the program to continue to the breakpoint 
# found at main():
    (gdb) c
    Breakpoint 1, main () at hello.c:8
    8       setprio (0,9);
# Ready to start the debug session.
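
Once you know the sequence, you can put the setup commands in a gdb command file so you don't have to retype them each session. A sketch, assuming the file is called hello.gdb and the same target address and paths as above:

    # hello.gdb -- run with: ntox86-gdb -quiet -x hello.gdb
    target qnx 10.428:8000
    upload hello /tmp/hello
    sym hello
    break main
    run /tmp/hello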

Get help

While in a debug session, any of the following commands could be used as the next action for starting the actual debugging of the project:

n
Next instruction
l
List the next set of instructions
help
Get the help main menu
help data
Get the help data menu
help inspect
Get help for the inspect command
inspect y
Inspect the contents of variable y
set y=3
Assign a value to variable y
bt
Get a back trace

Let's see how to use some of these basic commands.

# list command:
    (gdb) l
    3
    4   main () {
    5
    6       int x,y,z;
    7
    8       setprio (0,9);
    9       printf ("Hi ya!\n");
    10
    11      x=3;
    12      y=2;

# press <Enter> to repeat the last command:
    (gdb) <enter>
    13      z=3*2;
    14
    15      exit (0);
    16
    17  }

# break on line 11:
    (gdb) break 11
    Breakpoint 2 at 0x80483c7: file hello.c, line 11.

# continue until the first break point:
    (gdb) c
    Hi ya!

    Breakpoint 2, main () at hello.c:11
    11      x=3;

# Notice that the above command went past the
# printf statement at line 9. I/O from the
# printf statement is displayed on screen.

# inspect variable y, using short form of the
# inspect command.
    (gdb) ins y
    $1 = -1338755812
# get some help on step and next commands:
    (gdb) help s
    Step program until it reaches a different source line.
    Argument N means do this N times (or till program stops
    for another reason).
    (gdb) help n
    Step program, proceeding through subroutine calls.
    Like the "step" command as long as subroutine calls do not
    happen; when they do, the call is treated as one instruction.
    Argument N means do this N times (or till program stops
    for another reason).

# go to the next line of execution:
    (gdb) n
    12      y=2;
    (gdb) n
    13      z=3*2;
    (gdb) inspect z
    $2 = 1
    (gdb) n
    15      exit (0);
    (gdb) inspe z
    $3 = 6

# continue program execution:
    (gdb) continue

    Program exited normally.

# quit the debugger session:    
    (gdb) quit
    The program is running. Exit anyway? (y or n) y
    (61) con1 /home/allan/src >

Sample boot image

[virtual=x86,bios +compress] boot = {
    startup-bios -N node428
    PATH=/proc/boot:./ procnto
}

[+script] startup-script = {
# explicitly running in edited mode for the console link
    devc-ser8250 -e -b115200 &
    display_msg Welcome to Neutrino on a PC-compatible BIOS system 
# tcp/ip with a NE2000 Ethernet adaptor
    io-net -dne2000 -pttcpip if=ndi0: &
    waitfor /dev/socket
    pipe &
# pdebug needs devc-pty 
    devc-pty &
# starting the pdebug agent on port 8000
    [+session] pdebug 8000 &
}

[type=link] /usr/lib/
[type=link] /lib=/x86/lib
[type=link] /tmp=/dev/shmem         # tmp points to shared memory
[type=link] /dev/console=/dev/ser2  # no local terminal

[perms=+r,+x]         # Boot images made under MS-Windows need
                      # to be reminded of permissions.

[data=copy]            # All executables that can be restarted
                       # go below.
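
A buildfile like this one is turned into a bootable OS image with the mkifs utility. A sketch, assuming you've saved the buildfile as nto.build:

    # Build the OS image filesystem from the buildfile:
    mkifs nto.build nto.ifs

    # Then transfer nto.ifs to your boot medium (disk, flash,
    # or a network-boot server) using your usual method.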