|
|
|
|
|
---
title: "CTF + ARMeb + debugging"
description: >
  CTF + ARMeb + debugging
created: !!timestamp '2014-03-05'
time: 7:21 PM
tags:
- ctf
- debugging
---
|
|
|
|
|
|
|
I've been working on making the AVILA board work again with FreeBSD.
Thanks to Jim from Netgate for sending me a board to do this work.
|
|
|
|
|
|
|
I still have a pending patch waiting to go through bde that fixes an
unaligned off_t store and gets things farther, but with that patch applied
I'm getting a `panic: vm_page_alloc: page 0xc0805db0 is wired` shortly after
the machine launches the daemons.
|
|
|
|
|
|
|
I did the work to get a cross gdb working for armeb (committed in r261787 and
r261788), but that didn't help, as there is no kernel gdb support on
armeb. And since I'm doing this debugging over the network, I can't dump a
core.
|
|
|
|
|
|
|
I didn't feel like hand-decoding a struct vm_page, so I thought about other
methods, and one way is to use CTF to parse the data type and decode the
data. I know Python and ctypes, so I decided to wrap libctf and see
what I could do.
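
To give an idea of what that wrapping looks like, here's a minimal sketch.
It is not the actual ctf.py; the function names and signatures are assumptions
based on the Solaris-derived libctf API that FreeBSD ships (`ctf_open`,
`ctf_lookup_by_name`, `ctf_type_size`):

```python
import ctypes

# Minimal sketch of wrapping libctf with ctypes.  The calls and signatures
# below are assumed from the Solaris-derived libctf API; the real ctf.py
# differs.
libctf = ctypes.CDLL('libctf.so')

libctf.ctf_open.restype = ctypes.c_void_p            # ctf_file_t *
libctf.ctf_open.argtypes = [ctypes.c_char_p, ctypes.POINTER(ctypes.c_int)]
libctf.ctf_lookup_by_name.restype = ctypes.c_long    # ctf_id_t
libctf.ctf_lookup_by_name.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
libctf.ctf_type_size.restype = ctypes.c_ssize_t
libctf.ctf_type_size.argtypes = [ctypes.c_void_p, ctypes.c_long]

err = ctypes.c_int(0)
ctf = libctf.ctf_open(b'/boot/kernel/kernel', ctypes.byref(err))
if not ctf:
    raise ValueError('ctf_open failed, error %d' % err.value)

# Look up a type by name and report its size.
tid = libctf.ctf_lookup_by_name(ctf, b'struct vm_page')
print('struct vm_page is %d bytes' % libctf.ctf_type_size(ctf, tid))
```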
|
|
|
|
|
|
|
Getting the initial Python wrapper working was easy, but my initial test
data was the kernel on the amd64 box that I'm developing on. Now I
needed to use real armeb CTF data. I pointed it at my kernel, and I got:
"`File uses more recent ELF version than libctf`". Ok, extract the CTF
data from the kernel (CTF data is stored in a section named `.SUNW_ctf`)
and work on that directly:
|
|
|
```
$ objcopy -O binary --set-section-flags optfiles=load,alloc -j .SUNW_ctf /tftpboot/kernel.avila.avila /dev/null
objcopy: /tftpboot/kernel.avila.avila: File format not recognized
```
|
|
|
|
|
|
|
Well, ok, that's not too surprising since it's an armeb binary; let's try:
|
|
|
```
$ /usr/obj/arm.armeb/usr/src.avila/tmp/usr/bin/objcopy -O binary --set-section-flags optfiles=load,alloc -j .SUNW_ctf /tftpboot/kernel.avila.avila /tmp/test.avila.ctf
$ ls -l /tmp/test.avila.ctf
-rwxr-xr-x 1 jmg wheel 0 Mar 5 17:59 /tmp/test.avila.ctf
```
|
|
|
|
|
|
|
Hmm, that didn't work too well either, so let's just use dd to extract the
data using info from `objdump -x`.
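
In Python terms the extraction is just a seek and a bounded read; the offset
and size below are placeholders that would come from the `.SUNW_ctf` row of
the `objdump -x` section table:

```python
# Carve the .SUNW_ctf section out of the kernel by hand.  OFFSET and SIZE are
# hypothetical values read off the "File off"/"Size" columns of objdump -x.
OFFSET = 0x2a0000   # placeholder: file offset of .SUNW_ctf
SIZE = 0x48000      # placeholder: size of .SUNW_ctf

with open('/tftpboot/kernel.avila.avila', 'rb') as kernel:
    kernel.seek(OFFSET)
    data = kernel.read(SIZE)

with open('/tmp/avila.ctf', 'wb') as out:
    out.write(data)
```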
|
|
|
|
|
|
|
Ok, now that I've done that, I get:
|
|
|
```
ValueError: '/tmp/avila.ctf': File is not in CTF or ELF format
```
|
|
|
|
|
|
|
Hmm, why is that? Well, it turns out that the endianness of the CTF data
is wrong. The magic is `cf f1`, but the magic on amd64 is `f1 cf`; it's
endian swapped. That's annoying. After spending some time trying to
build a cross shared version of libctf, I found that it has the same
issue.
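
A quick sanity check of the extracted blob makes the problem obvious. This
assumes the CTF magic value of `0xcff1` from sys/ctf.h:

```python
# Peek at the first two bytes of the extracted CTF data.  CTF_MAGIC is
# 0xcff1, so a little-endian (amd64) writer produces f1 cf on disk and a
# big-endian (armeb) writer produces cf f1.
with open('/tmp/avila.ctf', 'rb') as f:
    magic = f.read(2)

if magic == b'\xf1\xcf':
    print('little-endian CTF data')
elif magic == b'\xcf\xf1':
    print('big-endian CTF data -- native libctf on amd64 will reject this')
else:
    print('not CTF data?')
```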
|
|
|
|
|
|
|
After a bit of looking around, I discovered that libctf can only ever read
native-endian data, but `ctfmerge` has a magic option that will write out
endian-swapped data if necessary, depending upon the ELF file it's
putting the data into. This means that the CTF data in an armeb object file
will differ depending upon the endianness of the host you compiled it on, so
the object file isn't cross compatible. But this does mean that the data in
the object files will be readable by libctf, just not the data written into
the kernel.
|
|
|
|
|
|
|
So, I create a sacrificial amd64 binary:
|
|
|
```
$ echo 'int main() {}' | cc -o /tmp/avila2.ctf -x c -
```
|
|
|
|
|
|
|
And use `ctfmerge` to put the data in it:
|
|
|
```
$ ctfmerge -L fldkj -o /tmp/avila2.ctf /usr/obj/arm.armeb/usr/src.avila/sys/AVILA/*.o
```
|
|
|
|
|
|
|
And again use `dd` to extract the `.SUNW_ctf` section into a separate file.
|
|
|
|
|
|
|
With all this work, I finally have the CTF data in a format that libctf
can parse, so I try to parse some data. Now the interesting thing is
that the CTF data does encode the sizes of integers, but it uses the native
arch's pointer size for `CTF_K_POINTER` types, which means that pointers
appear to be 8 bytes in size instead of the correct 4 bytes. A little
more hacking on the ctf.py script to force all pointers to be 4 bytes,
a little help converting the ddb output to a string, and finally I have
the dump of the struct vm_page that I was trying to get all along:
|
|
|
```
{'act_count': '\x00',
 'aflags': '\x00',
 'busy_lock': 1,
 'dirty': '\xff',
 'flags': 0,
 'hold_count': 0,
 'listq': {'tqe_next': 0xc0805e00, 'tqe_prev': 0xc06d18a0},
 'md': {'pv_kva': 3235856384,
        'pv_list': {'tqh_first': 0x0, 'tqh_last': 0xc0805de0},
        'pv_memattr': '\x00',
        'pvh_attrs': 0},
 'object': 0xc06d1878,
 'oflags': '\x04',
 'order': '\t',
 'phys_addr': 17776640,
 'pindex': 3572,
 'plinks': {'memguard': {'p': 0, 'v': 3228376932},
            'q': {'tqe_next': 0x0, 'tqe_prev': 0xc06d1f64},
            's': {'pv': 0xc06d1f64, 'ss': {'sle_next': 0x0}}},
 'pool': '\x00',
 'queue': '\xff',
 'segind': '\x01',
 'valid': '\xff',
 'wire_count': 1}
```
|
|
|
|
|
|
|
So, the above was produced w/ the final [ctf.py](https://www.funkthat.com/~jmg/ctf.py) script.
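
For the curious, the pointer-size override described above amounts to
something like the following sketch. It is illustrative only: the helper is
hypothetical, `CTF_K_POINTER` is kind 3 per sys/ctf.h, and the
`ctf_type_kind`/`ctf_type_size` signatures are assumed from the
Solaris-derived libctf API.

```python
# Illustrative sketch of forcing pointers to the target's size when computing
# member layouts; the real ctf.py internals differ.
CTF_K_POINTER = 3       # from sys/ctf.h
TARGET_PTR_SIZE = 4     # armeb pointers are 4 bytes, not the host's 8

def type_size(libctf, ctf, tid):
    # Pointers report the host's pointer size, so override them.
    if libctf.ctf_type_kind(ctf, tid) == CTF_K_POINTER:
        return TARGET_PTR_SIZE
    return libctf.ctf_type_size(ctf, tid)
```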