
Quick Introduction to Remote Debugging via gdb

I have recently been doing some remote debugging of apps on Linux-based systems, i.e. the host is on Ubuntu and the target machine is an Android-based device, and I picked up some of gdb’s remote debugging features along the way. So I wanted to write a quick post about it for newbies who might be struggling, or for those who are so used to the Visual Studio & Windows platform, like myself!

Let me get some basic terminology out of the way:

  • Host machine: The machine from which you drive the debugging session, i.e. where gdb itself runs. It’s also where you generally keep the unstripped binaries.
  • Target device: The device on which the app you want to debug runs. It could be any kind of computing platform, such as an Android tablet.
  • Executable: The binary of your app, which we’ll later attach gdb to.
  • gdbserver: This is what we’re gonna use to let our host & target machines communicate. It must be installed on the target device beforehand.

The way we’re gonna realize the remote debugging is simply this:

  1. Debug-build your app with optimizations off, asserts enabled, etc.
  2. Run gdbserver on the target machine to expose the running app to gdb on the host machine.
  3. Run gdb on the host machine and attach it to our app running on the target device.
  4. Debug happily!

Now let’s imagine that you have already debug-built your app on your host machine and want to run it on your target device; let’s go with an Android device for our example. If your target device is a tiny embedded system, or simply doesn’t have enough storage to hold your debug-built app, you can strip the binary before pushing it to the device:

strip -o my_stripped_binary my_binary # Write a stripped copy; keep the unstripped one for gdb on the host

adb push my_stripped_binary /data/local/data/my_app/ # Push it to the Android device
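Depending on the device, the pushed file may not be executable by default, so you may also need something along these lines (the path simply mirrors the example above):

adb shell chmod 755 /data/local/data/my_app/my_stripped_binary # Make the pushed binary executable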

We’re ready to go!

On our target Android device:

cd /data/local/data/my_app/

gdbserver :DEBUG_PORT my_stripped_binary app_arguments

Here, we cd into the app directory and call gdbserver with DEBUG_PORT (the port we’re going to remote debug through), our binary, and its arguments. DEBUG_PORT should be a free, reachable port, so if you’re on an office network or such, check what’s available!
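One thing to note for an Android device connected over USB: the host can only reach that port if it’s forwarded through adb first, so you’d typically also run the following on the host before connecting (or use the device’s IP address instead of localhost when connecting from gdb later):

adb forward tcp:DEBUG_PORT tcp:DEBUG_PORT # Forward the host port to the device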

On our host machine:

cd my_apps_directory

gdb

file my_binary # The unstripped debug-built binary

target remote localhost:DEBUG_PORT

b main # Set a breakpoint at the main() function

b my_function_to_debug.cpp:100 # Here we set a breakpoint using the FILE:LINE pattern

continue # Start execution; gdb will stop when one of our breakpoints is hit
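Once everything is wired up and a breakpoint is hit, the usual gdb commands behave just like they do in a local session, for example (my_variable is just a placeholder name):

bt # Show the call stack at the point where we stopped

print my_variable # Inspect a variable in the current frame

next # Step over the current source line and stop again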

If you try to debug an app now, you’ll hopefully find that gdb and gdbserver on the two machines connect to each other just fine, yet we still can’t see anything meaningful, because we’re missing an essential piece of info here: the .gdbinit file.

At any time, to see which libraries have been loaded by gdb, enter the following in the gdb console:

info sharedlibrary # Lists the shared libraries the app has loaded

The .gdbinit file basically describes the debugging configuration under which we’re gonna run our app. You can find lots of info about it on Google, but the main things your .gdbinit file should cover are where your debug-built libraries live (so gdb can map everything back to the corresponding source), the assembly style you prefer, which signals to ignore or stop at, etc. I generally create a slightly different .gdbinit file for each app and put the one I currently want to use with gdb under $HOME/; otherwise, you can keep it in the directory you start gdb from on the host. I intentionally don’t want to go into much detail about the .gdbinit file here because, for me, it’s how I figured out much of the gdb stuff; I think if you tackle its usage, what flag does what and all, you’ll get the gist of gdb remote debugging pretty quickly. Otherwise, shoot me a question here and we’ll see what the problem is, maybe!
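Still, just to give a flavor, a minimal .gdbinit for this kind of setup could look roughly like the following; the paths below are made up and will obviously differ for your app:

# Where the unstripped, debug-built libraries live on the host
set solib-search-path /home/me/my_app/build/debug/libs
# Where gdb should look for the sources if the paths baked into the binaries don't match
directory /home/me/my_app/src
# Preferred disassembly style
set disassembly-flavor intel
# Example of tweaking signal handling: don't stop on SIGPIPE, just pass it to the app
handle SIGPIPE nostop noprint pass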

That about does it here. As you may guess, there’s much more to learn and also much more that can go wrong, but gdb is heavily used, so it’s well covered all over the internet in fine detail.

Any suggestions and corrections are more than welcome! Thank you for your time!


Musings on Raytracing vs Rasterisation

Intent: While working on a simple raytracer project, I got to thinking about the pros and cons of current rendering techniques, namely rasterisation and raytracing, so I wanted to write down a few points as a reminder for future me.

Although most of my current knowledge of 3D graphics is based on rasterisation algorithms, I had certainly learned about raytracing before, yet never came around to implementing it from scratch. I have to say, it’s a pure joy, as I have a background in math (I graduated from mathematical engineering) and raytracing is all about tracing the natural steps of light-matter interaction.

So what’s raytracing anyways? Simply put, it’s the process of rendering objects in 3D by shooting rays from a virtual camera towards the scene and coming up with a color value for each pixel, given the material properties of the object visible at that very pixel. What kind of material properties, you may ask? It can be the type of material, such as glass, water or sand, or reflectivity and transparency coefficients. Due to its nature, features that are hard or costly to implement with current real-time rendering techniques, such as soft shadows, global illumination, etc., are inherently easier with it. The main benefit of raytracing comes from the fact that the whole scene is available to every single per-pixel computation, which is also the reason why it’s waaaay slower compared to rasterisation.
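Just to make that concrete, here is a tiny C++ sketch of the core idea: shoot one primary ray per pixel against a single hard-coded sphere and shade the hit point. The types, the scene and the shading below are all made up purely for illustration; a real raytracer would of course be structured very differently:

#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(const Vec3& v) { return v * (1.0 / std::sqrt(dot(v, v))); }

// Distance along the ray to the closest sphere hit, or -1.0 on a miss.
double hitSphere(const Vec3& center, double radius, const Vec3& origin, const Vec3& dir) {
    Vec3 oc = origin - center;
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;            // dir is normalized, so the quadratic's 'a' is 1
    return disc < 0.0 ? -1.0 : (-b - std::sqrt(disc)) / 2.0;
}

int main() {
    const int width = 80, height = 40;        // a tiny "image", printed as ASCII
    const Vec3 camera{0.0, 0.0, 0.0};         // virtual camera at the origin
    const Vec3 sphereCenter{0.0, 0.0, -3.0};  // the whole "scene": one sphere
    const Vec3 lightDir = normalize({1.0, 1.0, 1.0});

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Shoot a primary ray from the camera through the current pixel.
            double u = (x + 0.5) / width * 2.0 - 1.0;
            double v = 1.0 - (y + 0.5) / height * 2.0;
            Vec3 dir = normalize({u, v, -1.0});

            double t = hitSphere(sphereCenter, 1.0, camera, dir);
            if (t < 0.0) { std::putchar('.'); continue; }   // ray missed: background

            // Shade the hit point with a Lambert (diffuse) term; a real raytracer
            // would spawn secondary rays here for shadows, reflections, etc.
            Vec3 hit = camera + dir * t;
            Vec3 normal = normalize(hit - sphereCenter);
            double brightness = std::fmax(0.0, dot(normal, lightDir));
            std::putchar(" .:-=+*#%@"[static_cast<int>(brightness * 9.0)]);
        }
        std::putchar('\n');
    }
    return 0;
}

Even in this toy version you can see where the cost goes: every single pixel runs its own intersection tests against the scene, and a real renderer would recursively spawn many more rays per pixel.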

So if raytracing is so nice but a bit slower, can’t we just use it on faster computers soon for mind-blowingly beautiful real-time rendering? The thing is, even though there exist dozens of spatial acceleration structures and optimizations for better data coherency, it’s still nowhere near rasterisation in speed. The reason is exactly what makes raytraced scenes look so natural and beautiful: the scene structure every ray has to consider. Rasterisation simply works with primitives such as triangles at its deepest and requires just a little bit of information about the current pixel to shade it. We can currently render millions of vertices at 60 FPS on an average PC GPU, whereas raytracing comparable scenes can take days or even weeks, even with multi-PC and multi-threaded setups!

I think the smart rendering people of the past made quite a choice going (read: being forced to go) with rasterisation instead of raytracing, even though it is sometimes absurd how cheap the approximations are that get made for performance’s sake, to mimic a feature that comes much more naturally in raytracing. Given that some steps are nowadays being taken towards dedicated raytracing hardware, my hopes are high that in the mid-term we’ll get to see a simple game with amazing raytraced content on PCs.

Cheers!


Compiling Simple x64 Assembly Files on Windows

While reading through the fast Introduction to x64 Assembly here, I wanted to simply compile the example without bothering with Visual Studio project settings and such, so I wrote the no-brainer Windows batch file below to compile x64 assembly!

@echo off
cd [PATH_TO_OUTPUT_DEST]
REM v7.1A below can be 7.0A/8.0A/8.1A depending on the installed Windows SDK version
set LIB=%LIB%;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Lib\x64;
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\ml64" [PATH_TO_ASM_FILE]\hello.asm /link /subsystem:windows /defaultlib:kernel32.lib /defaultlib:user32.lib /entry:[NAME_OF_ENTRY_POINT]
REM Keep the console open to see the output
pause

Go and try it:
extrn ExitProcess: PROC    ; From kernel32.lib
extrn MessageBoxA: PROC    ; From user32.lib

.data
caption db '64-bit hello!', 0
message db 'Hello World!', 0

.code
Start PROC
    sub rsp, 28h           ; Reserve shadow space and keep the stack 16-byte aligned
    mov rcx, 0             ; hWnd = NULL
    lea rdx, message       ; lpText
    lea r8, caption        ; lpCaption
    mov r9d, 0             ; uType = MB_OK
    call MessageBoxA
    mov ecx, eax           ; Use the message box result as the exit code
    call ExitProcess
Start ENDP
End

If you save the code above into hello.asm, use Start as [NAME_OF_ENTRY_POINT], and set [PATH_TO_OUTPUT_DEST] and [PATH_TO_ASM_FILE] in the batch file to where it’s saved, running the batch file should produce hello.exe. Simply run the program and see the magic!

Enjoy!


Hello, World!

Hi, there!

Here goes my first attempt at creating a simple personal blog to show my current projects, work and thoughts about anything I happen to think of along the way.

Stay tuned!