
Friday, January 11 2013

How to un-ubinize a UBI image?

Ubinize builds a UBI image from a set of file systems (currently, only UBIFS is supported). I had to do the opposite: extract a volume from a UBI image.

Our UBI image is split into Physical Erase Blocks (PEB) (= "Erase Blocks" on mtd devices). Each PEB begins with a 4-character magic number: "UBI#". This gives us a hint about the PEB size. A PEB begins with a ubi_ec_hdr structure (see ubi-media.h). This structure contains two important values for us:

  • the offset to a second structure called ubi_vid_hdr (see also ubi-media.h)
  • the offset to the data itself (also called a Logical Erase Block (LEB))
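Since every PEB starts with the "UBI#" magic, the PEB size of an unknown image can be guessed by listing the byte offsets of the magic and looking at the gap between two consecutive occurrences. A quick sketch with standard tools (image.ubi is a placeholder file name):

```shell
# Byte offsets of the "UBI#" magic; the gap between the
# first two occurrences is the PEB size
grep -abo 'UBI#' image.ubi | head -n 2 | cut -d: -f1
```

For example, offsets 0 and 131072 mean 128 KiB PEBs.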

The ubi_vid_hdr structure contains the volume id and the logical block number. To extract a volume from a UBI image, we have to scan the image, then reorder and concatenate all the LEBs that carry the right volume id.

How do we know the correct volume id? There are PEBs with a special volume id: 0x7FFFEFFF. These PEBs contain the association table between names and volume ids. You can use ubinfo to parse this table and get the volume ids.

When we work on a freshly generated UBI image, there are some simplifications:

  • LEBs are already in the correct order
  • Volumes are ordered in the image
  • The 0x7FFFEFFF PEBs are the first two of the image

If you do the same thing on a real UBI device (or a dump), you have to scan all the PEBs to find them. In addition, you should take care of ubi_vid_hdr->sqnum in case you find two LEBs with the same logical block number.

One last thing: all data are big endian.

So, we can write this code:

#include "ubi-media.h"

lnum = 0;
/* peb_size is the PEB size, guessed from the "UBI#" magic spacing */
while ((size = read(fd, buf, peb_size)) > 0) {
        ec_hdr = (struct ubi_ec_hdr *) buf;
        vid_hdr = (struct ubi_vid_hdr *) (buf + be32toh(ec_hdr->vid_hdr_offset));
        if (be32toh(ec_hdr->magic) != UBI_EC_HDR_MAGIC)
                error(0, 0, "Bad EC_HDR magic number");
        if (be32toh(vid_hdr->magic) != UBI_VID_HDR_MAGIC)
                error(0, 0, "Bad VID_HDR magic number");
        if (be32toh(vid_hdr->vol_id) == vol) {
                if (be32toh(vid_hdr->lnum) != lnum)
                        error(0, 0, "sparse logical blocks are not supported");
                write(fdout, buf + be32toh(ec_hdr->data_offset), size - be32toh(ec_hdr->data_offset));
                lnum++;
        }
}

Thursday, December 20 2012

Provide a standalone toolchain using buildroot

Buildroot is great for making a full Linux system from sources. Nevertheless, the toolchain is a special part of the build process:

  • Compilation is very long
  • Compilation is difficult to stop and restart
  • We rarely recompile the toolchain
  • We want to provide the toolchain as a standalone package, without Buildroot

Some advice about building toolchains with Buildroot:

  1. Use crosstool-ng. Buildroot supports internal toolchain building, but crosstool-ng is specialized in building toolchains. The resulting toolchains depend less on their installation path and there are more options.
  2. Build out of tree (pass the O= option to make). Since toolchain compilation is long, always test in a fresh new directory without breaking your current work.
  3. Change CONFIG_BR2_HOST_DIR. Toolchains often depend on their build path. It is not great to ask the end user to copy the toolchain into /home/toto/mywork/buildroot/unstable_dontdeliver/host. /opt/arm-mycompagny-linux-eabi is a far better choice.
  4. $BR2_HOST_DIR will contain a full BSP. Just run "make uclibc" if you only want the toolchain with the C library headers. Make an archive just after this step.
  5. Write two configurations for your board: one compiles the toolchain, the other uses it as an external toolchain.
  6. Once your rootfs boots, add dropbear to your system and run the gcc testsuite. It is really easy to run and guards against bugs like badly configured floating-point units, bad exception catching, etc.

Thursday, December 6 2012

Linux Kernel training

You will find here the Linux Kernel training material given by our company, Sysmic:

You can contact me to obtain the sources and the teaching material for this training.

Sunday, July 29 2012

How to get absolute timestamps from the output of dmesg?

First, how do we know when the system was started?

We can use uptime, but it only gives how long the system has been up.

date can do conversion for us:

 date -d "now - $(cut -d ' ' -f 1 /proc/uptime) seconds"

And now, how do we get absolute times in the output of dmesg? date can add the timestamps to the boot date of the system. We only need a little regular-expression work (using perl):

 dmesg | perl -pe '
   BEGIN { $DATE = `date -d "$(cut -d \ -f 1 /proc/uptime) seconds ago"`; chop $DATE; }
   if (/^\[ *(\d+)\.\d+\]/) {
     $time = $1;
     $newtime = `date -d "$DATE + $time seconds"`;
     chop $newtime;
     s/^\[[^]]*\]/[$newtime]/;
   }'

Which gives us:

 [Sun Jul 29 01:12:05 CEST 2012] eth0: no IPv6 routers present
 [Sun Jul 29 01:12:06 CEST 2012] init: plymouth-stop pre-start process (2220) terminated with status 1
 [Sun Jul 29 01:12:11 CEST 2012] NVRM: Xid (0000:02:00): 56, CMDre 00000000 0000089c 0100cb14 00000007 00000000
 [Sun Jul 29 13:21:06 CEST 2012] ISO 9660 Extensions: Microsoft Joliet Level 3
 [Sun Jul 29 13:21:06 CEST 2012] ISO 9660 Extensions: RRIP_1991A

Friday, April 13 2012

The simplest Makefile you can write

My students don't use Makefiles, because they think it is too difficult to write one. Indeed, all the Makefile samples I have seen on the internet are complex. This is my suggestion for a quick setup:

  bin: bin.o dependency1.o dependency2.o

And that's all! Make knows how to link object files together if one of these objects has the same name as the final binary. And of course, it knows how to convert source files to objects.

Even out-of-source builds work with this Makefile:

 $ echo "bin: bin.o dependency1.o dependency2.o" > Makefile
 $ mkdir out
 $ cd out
 $ make -f ../Makefile VPATH=..
 cc    -c -o bin.o ../bin.c
 cc    -c -o dependency1.o ../dependency1.c
 cc    -c -o dependency2.o ../dependency2.c
 cc   bin.o dependency1.o dependency2.o   -o bin

And cross-compile:

 $ mkdir out-arm
 $ cd out-arm
 $ make -f ../Makefile VPATH=.. CC=arm-linux-gcc
 arm-linux-gcc    -c -o bin.o ../bin.c
 arm-linux-gcc    -c -o dependency1.o ../dependency1.c
 arm-linux-gcc    -c -o dependency2.o ../dependency2.c
 arm-linux-gcc   bin.o dependency1.o dependency2.o   -o bin

Sure, this Makefile lacks a "clean" rule, dependencies on headers, etc., but it is enough for a quick test.

Tuesday, November 8 2011

Embedded Linux training

You will find here the Embedded Linux training material given by our company, Sysmic:

You can contact me to obtain the sources and the teaching material for this training.

Real-time lab material

Following the previous post, I am publishing the real-time lab material associated with my course. I do not give my students the full set of exercises; I select exercises according to needs (especially since their interest is quite uneven).

You can contact me if you want to obtain the corrections and the LaTeX sources of this lab material.

Real-time course

You will find below the material for the real-time course I teach in various institutions:

Of course, it stays true to myself: very Linux-oriented.

I also provide the LaTeX sources. Although this course is under Creative Commons, I ask you to contact me if you want to reuse it. I like getting to know other specialists.

Monday, September 5 2011

Change default umask in Ubuntu

Add "session optional pam_umask.so umask=0002" to the PAM session configuration file (on Ubuntu, typically /etc/pam.d/common-session).

Alternatively, modify the umask value in /etc/profile.

Wednesday, August 24 2011

Process communication in shell: fifo, redirection and coproc

How can processes communicate easily in a shell script?

We can use fifo:

$ mkfifo my_fifo
$ cat my_fifo &
[1] 7266
$ echo foo > my_fifo
[1]  + done       cat my_fifo

As you can see, cat finishes as soon as something is written to the fifo. We can work around this behavior by using tail -f instead of cat: tail -f will reopen the fifo.

We may use two fifos to communicate with ssh, for example. But we will have the same problem as with cat:

$ mkfifo fifo1 fifo2
$ ssh host >fifo1 <fifo2 &
[1] 7266
$ echo ls > fifo2
[1]  + done       ssh host > fifo1 < fifo2

The other problem is that writing to a fifo blocks as long as there is no process on the other end:

 $ echo foo > my_fifo
 [block until someone read my_fifo]

A good approach would be to keep the file open the whole time.

On the other hand, your shell provides a way to open file descriptors and link them to a file:

$ exec 5>file
$ exec 6<file

This can be useful to read a file line by line:

 $ exec 6<file
 $ while read A; do
    [[ $A == "}" ]] && break
 done <&6
 $ read A <&6
 $ echo $A
 Line just after first closed brace

(You can also use the syntax read -u 6 A.)
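The trick above can be replayed end-to-end with a throwaway file (the file content here is invented for the demonstration):

```shell
# Skip everything up to the first closing brace, then read the next line
printf 'first line\n}\nline just after\n' > file
exec 6<file
while read A; do
  [ "$A" = "}" ] && break
done <&6
read A <&6
echo $A
```

which prints "line just after".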

Opening a file descriptor in the shell is really useful when used with a fifo, because it keeps the fifo open for the whole session:

$ mkfifo my_fifo
$ exec 5<>my_fifo
$ perl -pe '$d=`date`; chop $d; s/^(.*)$/$d $1/' <&5 &
[1] 7266
$ echo foo >&5
Wed Aug 24 22:01:47 CEST 2011 foo

We can now use fifos with file descriptors to communicate correctly with an ssh session:

$ mkfifo fifo1 fifo2
$ exec 5<>fifo1 6<>fifo2
$ ssh host <&5 >&6 &
[1] 7266
$ echo ls >&5
$ cat <&6

It is a little restrictive to have to create two fifos. The shell provides a builtin called coproc. It executes a command in the background while communicating with it through file descriptors (the equivalent of popen2 in perl). The file descriptors are accessible via &p (the following code is for zsh):

$ coproc cat
[1] 1272
$ exec 5>&p
$ exec 6<&p
$ coproc cat -n
[2] 1291
$ exec 7>&p
$ exec 8<&p
$ print -u 5 foo
$ print -u 7 bar
$ read -u 6 A
$ read -u 8 B
$ echo $A
foo
$ echo $B
1       bar

Note that the equivalent code under bash gives a warning, but it also works.
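For the record, under bash the same cat coprocess is written with the COPROC array instead of &p (a minimal sketch):

```shell
#!/bin/bash
# COPROC[0] reads from the coprocess, COPROC[1] writes to it
coproc cat -n
echo bar >&"${COPROC[1]}"
read -r B <&"${COPROC[0]}"
echo "$B"
```

Note that bash only allows one anonymous coprocess at a time; a second one needs the named form (coproc name { ... }).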

Finally, you can allocate file descriptors dynamically using the {NAME} syntax:

$ coproc cat
[1] 1272
$ exec {CAT_IN}>&p
$ exec {CAT_OUT}<&p
$ coproc cat -n
[2] 1291
$ exec {CATN_IN}>&p
$ exec {CATN_OUT}<&p
$ print foo >&$CAT_IN
$ print bar >&$CATN_IN
$ read A <&$CAT_OUT
$ read B <&$CATN_OUT
$ echo $A
foo
$ echo $B
1       bar

Saturday, May 28 2011

These cpp macros you always forget

I always forget how to stringify and concatenate values with cpp. These macros need to be called in two levels because of cpp's weird prescan behavior: arguments are not macro-expanded first when the macro body contains # or ##.

Turn a value into a string:

 #define __STRINGIFY(X)      #X
 #define STRINGIFY(X)        __STRINGIFY(X)

Concatenate 2, 3 or 4 values:

 #define __CONCAT2(X, Y)       X ## Y
 #define CONCAT2(X, Y)        __CONCAT2(X, Y)
 #define __CONCAT3(X, Y, Z)    X ## Y ## Z
 #define CONCAT3(X, Y, Z)     __CONCAT3(X, Y, Z) 
 #define __CONCAT4(W, X, Y, Z) W ## X ## Y ## Z
 #define CONCAT4(W, X, Y, Z)  __CONCAT4(W, X, Y, Z)

Some tests:

  • Input
 #define _VAR1_ var1
 #define _VAR2_ var2
 #define _VAR3_ var3
 #define _VAR4_ var4
 #define RESULT CONCAT3(text, _VAR1_, _VAR2_)
 CONCAT2(_VAR1_, _VAR2_)
 CONCAT3(_VAR1_, _VAR2_, _VAR3_)
 CONCAT4(_VAR1_, _VAR2_, _VAR3_, _VAR4_)
 CONCAT4(_VAR1_, const, _VAR3_, const)
  • Output:

 var1var2
 var1var2var3
 var1var2var3var4
 var1constvar3const

Source file there

Thursday, May 12 2011

How to consume a particular amount of CPU (or "a good PID sample usage")

To test the robustness of one of our products, we wanted to consume a particular percentage of CPU. This requires no specific hardware and is a good real-world example of PID usage. How to achieve this?

We can use a loop to consume CPU and stop it after an amount of time (timediff, not shown in the original post, returns the difference between two timespec in seconds):

 static double timediff(struct timespec *a, struct timespec *b) {
     return (a->tv_sec - b->tv_sec) + (a->tv_nsec - b->tv_nsec) / 1e9;
 }

 clock_gettime(CLOCK_REALTIME, &startwork);
 do {
     clock_gettime(CLOCK_REALTIME, &endwork);
 }  while (timediff(&endwork, &startwork) < x);

Now, we split time in periods of same size. Each period will be composed of two part: one to consume CPU (twork) and one to sleep (tsleep):

for (;;) {
  clock_gettime(CLOCK_REALTIME, &startwork);
  tsleep = period - twork;
  do {
      clock_gettime(CLOCK_REALTIME, &endwork);
  }  while (timediff(&endwork, &startwork) < twork);
  usleep(tsleep); /* sleep for the rest of the period */
}

period should be small enough to give smooth CPU usage. Nevertheless, if you choose it smaller than 1 / HZ, there is a risk of overhead due to context switches. I suggest using 1ms to 100ms.

We now need to compute twork. twork = period * objective looks like a good start. Nevertheless, it is not robust. What happens if we are preempted during our loop, or if our period does not run for its exact duration? We need to compute the exact amount of CPU used and correct twork accordingly. So, we use the famous PID regulator:

 static float compute_correction(float objective) {
   struct timespec w;
   clock_gettime(CLOCK_REALTIME, &w);
   struct rusage u;
   getrusage(RUSAGE_SELF, &u);
   static long time_prev = 0;
   long time_cur = w.tv_sec * 1000000  + w.tv_nsec / 1000;
   static long usage_prev = 0;
   long usage_cur = (u.ru_utime.tv_sec + u.ru_stime.tv_sec) * 1000000  + (u.ru_utime.tv_usec + u.ru_stime.tv_usec);
   static float Ep = 0.;
   static float Ei = 0.;
   static float Ed = 0.;
   static const float Kp = 0.2;
   static const float Ki = 0.2;
   static const float Kd = 0.;
   // Not enough samples taken (it's the first one!)
   if (time_prev == 0) {
       usage_prev = usage_cur;
       time_prev = time_cur;
       return 0.;
   }
   // Wait at least 25ms to be sure usage time is updated
   if (time_prev + 25000 > time_cur)
       return 0.;
   Ed = (objective - (float) (usage_cur - usage_prev) / (float) (time_cur - time_prev)) - Ep;
   Ep += Ed;
   Ei += Ep;
   //printf("(p:%f i:%f d:%f), usage:(%ld / %ld)\n", Ep, Ei, Ed, (usage_cur - usage_prev), (time_cur - time_prev));
   usage_prev = usage_cur;
   time_prev = time_cur;
   return Kp * (Ep + Ki * Ei + Kd * Ed);
 }

Sure, values of Kp, Ki, and Kd could (should) be tuned.

We now have to add this correction to twork:

 out = objective;
 for (;;) {
       out += compute_correction(objective);
       // Adjust work and sleep time slices
       twork = period * out;
       if (twork > period)
           twork = period;
       if (twork < 0)
           twork = 0;
       tsleep = period - twork;
       /* ... busy-loop for twork, then sleep for tsleep, as above ... */
 }

To improve correctness, we also increase the priority of our process:

   struct sched_param tSp;
   tSp.sched_priority = 90;
   if (sched_setscheduler(0, SCHED_RR, &tSp) < 0) 
       fprintf(stderr, "Warning: Unable to set Scheduler: %s (Are you root?)\n", strerror(errno));

The result can be found there

Monday, December 6 2010

Get only your errors from output of a command

Sometimes, you want to run your favorite code-checker tool on an old piece of software. The tool will output many warnings, but only the warnings you have introduced are of interest.

The following trivial script takes the output of your tool as input and keeps only the lines referenced in the patch file given as parameter (your changes). Sure, this script could easily be adapted to the output of any tool.

# Syntax:
#   external_tool | ./filter.pl patch.diff   (name the script as you like)
# Note: only unified diff is supported

open(FILE, $ARGV[0]);

my %l;

my $file = "<Syntax error>";

# Fill a table with all chunks of modified lines
while (<FILE>) {
    $file = $1 if (m/^\+\+\+ ([^\t ]*)/);
    if (m/^@@.*\+([0-9]+),([0-9]+) @@/) {
        push @{$l{$file}}, [ $1, $1 + $2 ];
    }
}

# Dump table
#foreach $k (keys %l) {
#    foreach $i (0 .. $#{$l{$k}}) {
#        print join ":", $k, @{$l{$k}[$i]};
#        print "\n";
#    }
#}

while (<STDIN>) {
    if (m/^([^:]+):([0-9]+)/) {
        if (defined $l{$1}) {
            foreach $i (0 .. $#{$l{$1}}) {
                print if ($2 >= $l{$1}[$i][0] and $2 <= $l{$1}[$i][1]);
            }
        }
    } else {
        # Print lines that don't look like errors
        print;
    }
}

Friday, October 29 2010

Build cross-compiled kernel debian package

As you may know, you just have to add the ARCH option, and if necessary CROSS_COMPILE, to the command line:

 make ARCH=powerpc CROSS_COMPILE=ppc-linux-gnu- XXX_defconfig
 make ARCH=powerpc CROSS_COMPILE=ppc-linux-gnu- zImage

My primary concern is to compile an x86 32-bit kernel in a 64-bit environment. So my compilation lines are:

 make ARCH=i386 i386_defconfig
 make ARCH=i386 bzImage

As you may also know, the deb-pkg rule of the kernel Makefile is able to create a debian package. You should use fakeroot to be able to create the package as a non-root user:

 fakeroot make deb-pkg

Nevertheless, it will always create a package for the current architecture. So the following line will still create a package for the build machine's architecture instead of powerpc:

 fakeroot make ARCH=powerpc CROSS_COMPILE=ppc-linux-gnu- deb-pkg

To correct this behavior, you can use the DEB_HOST_ARCH variable:

 fakeroot make DEB_HOST_ARCH=powerpc ARCH=powerpc CROSS_COMPILE=ppc-linux-gnu- deb-pkg

To summarize, to quickly create an x86 32-bit kernel debian package in a 64-bit environment, I do:

 mkdir build
 make ARCH=i386 O=build i386_defconfig
 fakeroot make -j4 DEB_HOST_ARCH=i386 ARCH=i386 O=build deb-pkg

Monday, October 4 2010

Hadopi phishing

You have surely heard about the various phishing attempts using Hadopi e-mails. Consequently, Hadopi has published articles to raise people's awareness of the problem. I note that despite all these recommendations, it is not possible to authenticate an e-mail coming from Hadopi with 100% certainty.

I am quite astounded that Hadopi, pseudo-specialists in digital rights protection, did not think of setting up a system for signing their e-mails...

Thursday, May 27 2010

Will Linux dominate the world?

PC INpact has just published an article about the evolution of Linux over the last years. I fully agree with this point of view.

One question remains open: "How many Linux systems are running in the world?". While the question is relatively simple for desktop systems, it is very complex for embedded systems: STBs, routers, Android, Nokia, Archos, etc. How many do they represent?

Friday, April 16 2010

Orange approves MeeGo

I learn from a press release that Orange is the first telecommunications operator to bring its support to MeeGo, the platform merging Moblin and Maemo.

I am quite surprised coming from Orange, but it makes me happy.

Connect to Bluetooth Access Point using Bluez framework

Bluez is packaged with a set of very useful scripts. For example, to connect to a Bluetooth access point:

$ /usr/share/doc/bluez/examples/test-discovery       
[ 00:22:FC:38:3F:79 ]
   Name = Nokia 5310 XpressMusic
   Paired = 0
   LegacyPairing = 1
   Alias = Nokia 5310 XpressMusic
   Address = 00:22:FC:38:3F:79
   RSSI = -63
   Class = 0x5a0204
   Icon = phone
$ /usr/share/doc/bluez/examples/test-device create 00:22:FC:38:3F:79
$ /usr/share/doc/bluez/examples/test-network 00:22:FC:38:3F:79 NAP &
Connected /org/bluez/1104/hci0/dev_00_22_FC_38_3F_79 to 00:22:FC:38:3F:79
Press CTRL-C to disconnect
$ sudo dhclient bnep0

(Tested on Kubuntu 10.4beta2)

Thursday, April 8 2010

Convert a binary file to a C structure in one line of shell

The -e option of hexdump formats the output printf-style:

$ hexdump -e '16/1 "0x%02X, " "\n"' < file
0x02, 0xB0, 0x2A, 0x00, 0x01, 0xF9, 0x00, 0x00, 0xE0, 0xC8, 0xF0, 0x13, 0x09, 0x11, 0x01, 0x00,
0xE0, 0x67, 0x00, 0x8C, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x21, 0x02,
0xE0, 0xC8, 0xF0, 0x00, 0x04, 0xE0, 0xC9, 0xF0, 0x00, 0x85, 0xC5, 0xE7, 0xDC, 0x  , 0x  , 0x  ,

16/1 indicates that the following format is repeated 16 times, stepping 1 byte at a time. "0x%02X, " specifies the format. When hexdump has finished with the first format, it moves on to the next one; here we print a newline. Hexdump loops over this format string until the end of the input.

We can replace \n with something a bit more attractive and add indentation:

$ hexdump -e '"\t" 16/1 "0x%02X, " " // %.7_ax\n"'  < file
        0x02, 0xB0, 0x2A, 0x00, 0x01, 0xF9, 0x00, 0x00, 0xE0, 0xC8, 0xF0, 0x13, 0x09, 0x11, 0x01, 0x00, // 0000000
        0xE0, 0x67, 0x00, 0x8C, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x21, 0x02, // 0000010
        0xE0, 0xC8, 0xF0, 0x00, 0x04, 0xE0, 0xC9, 0xF0, 0x00, 0x85, 0xC5, 0xE7, 0xDC, 0x  , 0x  , 0x  , // 0000020

Well, you still have to remove the droppings at the end of the output manually, but the bulk of the work is done.
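Incidentally, the droppings at the end disappear if hexdump formats one byte per iteration instead of 16, since there is nothing left to pad (at the cost of the 16-column layout):

```shell
# One byte per format unit: no "0x  ," padding after the last byte
hexdump -v -e '/1 "0x%02X, "' < file; echo
```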

Wednesday, March 24 2010

Convert keys between GnuPG, OpenSsh and OpenSSL

OpenSSH to OpenSSL

OpenSSH private keys are directly understood by OpenSSL. You can test, for example:

openssl rsa -in ~/.ssh/id_rsa -text
openssl dsa -in ~/.ssh/id_dsa -text
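This can be checked with a throwaway key (on recent OpenSSH, -m PEM forces the classic PEM format this post is about; older releases produced it by default):

```shell
# Generate a passphrase-less RSA key and read it back with OpenSSL
ssh-keygen -q -t rsa -m PEM -N '' -f testkey
openssl rsa -in testkey -noout -text | head -n 1
```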

You can also convert them to PEM format easily (notice that the format of SSH private keys and PEM are very close):

openssl rsa -in ~/.ssh/id_rsa -out key_rsa.pem
openssl dsa -in ~/.ssh/id_dsa -out key_dsa.pem

So, you can directly use it to create a certification request:

openssl req -new -key ~/.ssh/id_dsa -out myid.csr

You can also use your ssh key to create a self-signed certificate:

openssl x509 -req -days 3650 -in myid.csr -signkey ~/.ssh/id_rsa -out myid.crt

Notice I have not found how to manipulate ssh public keys with OpenSSL.

OpenSSL to OpenSSH

The private key format is the same between OpenSSL and OpenSSH. So you just have to rename your OpenSSL key:

 cp myid.key id_rsa

In OpenSSL, there is no specific file for public keys (public keys are generally embedded in certificates). However, you can extract the public key from the private key file:

ssh-keygen -y -f myid.key > id_rsa.pub

GnuPG to OpenSSH

First, you need to know the fingerprint of your RSA key. You can use:

  gpg --list-secret-keys --keyid-format short

Next, you can use the openpgp2ssh tool distributed with the monkeysphere project:

 gpg --export-secret-keys 01234567 | openpgp2ssh 01234567 > id_rsa

A few notes are necessary:

  • 01234567 must be the fingerprint of an RSA key (or subkey)
  • gpg --export-secret-keys also accepts the fingerprint of the whole key (in this case, it exports all subkeys). However, openpgp2ssh only accepts the fingerprint of an RSA key
  • If no argument is provided, openpgp2ssh exports the RSA keys it finds

You can now extract ssh public key using:

ssh-keygen -y -f id_rsa > id_rsa.pub

GnuPG to OpenSSL

We already saw all the steps. Extract the key as for ssh:

  gpg --list-secret-keys --keyid-format short
  gpg --export-secret-keys 01234567 | openpgp2ssh 01234567 > myid.key

You can then convert this key to PEM format:

 openssl rsa -in myid.key -out myid.pem

You can create a certification request:

openssl req -new -key myid.key -out myid.csr

You can create a sef-signed certificate:

openssl x509 -req -days 3650 -in myid.csr -signkey myid.key -out myid.crt


The gpgsm utility can export keys and certificates in PKCS#12 format:

gpgsm -o  secret-gpg-key.p12 --export-secret-key-p12 0xXXXXXXXX

You have to extract the key and the certificates separately:

openssl pkcs12 -in secret-gpg-key.p12 -nocerts -out gpg-key.pem
openssl pkcs12 -in secret-gpg-key.p12 -nokeys -out gpg-certs.pem

You can now use it in OpenSSL.

You can also do a similar thing with GnuPG public keys; there will be only certificate output.


The inverse process:

openssl pkcs12 -export -in gpg-certs.pem -inkey gpg-key.pem -out gpg-key.p12
gpgsm --import gpg-key.p12


Now, let's chain the processes to make a GnuPG key usable by OpenSSH:

 gpgsm -o  secret-gpg-key.p12 --export-secret-key-p12 0xXXXXXXXX
 openssl pkcs12 -in secret-gpg-key.p12 -nocerts -out gpg-key.pem

We need to protect the key, otherwise ssh refuses it:

 chmod 600 gpg-key.pem
 cp gpg-key.pem ~/.ssh/id_rsa
 ssh-keygen -y -f gpg-key.pem > ~/.ssh/id_rsa.pub


To go the other way, we first need to create a (self-signed) certificate for our ssh key:

openssl req -new -x509 -key ~/.ssh/id_rsa -out ssh-cert.pem

We can now import it into GnuPG:

openssl pkcs12 -export -in ssh-cert.pem -inkey ~/.ssh/id_rsa -out ssh-key.p12
gpgsm --import ssh-key.p12

Notice you cannot import/export DSA ssh keys to/from GnuPG
