gcc precompiled headers: weird behaviour with the -c option

March 28th, 2010

Short story

Use the -fpch-preprocess option together with the -c option in order to make precompiled headers work properly.
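Assuming your common headers are collected into pre.h (full details below), the whole recipe boils down to:

    $ g++ -I. pre.h                                  # produces pre.h.gch
    $ g++ -I. -include pre.h -fpch-preprocess -c foo.cpp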

Long story

I’m using gcc-4.4.1 on Linux, and before trying precompiled headers in a really large project I decided to test them on a simple program. They “kinda worked”, but I was not happy with the results and was sure there was something wrong with my setup.

First of all, I wrote the following program (main.cpp) to test whether they worked at all:

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/type_traits.hpp>
 
    int main()
    {
      return 0;
    }

Then I created the precompiled header file pre.h (in the same directory) as follows:

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/type_traits.hpp>

…and compiled it:

    $ g++ -I. pre.h

(pre.h.gch was created)

After that I measured compile time with and without precompiled headers:

with pch

    $ time g++ -I. -include pre.h main.cpp

    real	0m0.128s
    user	0m0.088s
    sys	 0m0.048s

without pch

    $ time g++ -I. main.cpp 

    real	0m0.838s
    user	0m0.784s
    sys	 0m0.056s

So far so good! **Almost 7 times faster, that’s impressive!** Then I tried something more realistic. Since all my sources are built with the -c option, I tried using it, and for some reason I couldn’t make pch play nicely with it. Here is what I did…

I created the test module foo.cpp as follows:

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/type_traits.hpp>
 
    int whatever()
    {
      return 0;
    }

Here are the timings of my attempts to build the module foo.cpp with and without pch:

with pch

    $ time g++ -I. -include pre.h -c foo.cpp 

    real	0m0.357s
    user	0m0.348s
    sys	0m0.012s

without pch

    $ time g++ -I. -c foo.cpp 

    real	0m0.330s
    user	0m0.292s
    sys	0m0.044s

That was quite strange: it looked like there was no speedup at all! (And yes, I ran the timings several times.) It turned out precompiled headers were not used at all in this case; I checked it with the -H option (the output of g++ -I. -include pre.h -c foo.cpp -H didn’t list pre.h.gch).

I was pretty much in despair, looking at gcc’s man page and trying misc. options…until I stumbled upon -fpch-preprocess. Hell, why not try using it - and all of a sudden it worked :) Here are the timings:

    $ time g++ -I. -include pre.h -c foo.cpp -fpch-preprocess

    real	0m0.028s
    user	0m0.016s
    sys	0m0.016s

I only vaguely understand why this worked, so if you do, please explain it in the comments; I’d be very grateful for that.

Mercurial file conflicts resolution similar to Subversion behavior

January 8th, 2010

For the impatient

Ok, here is what you have to do on your Ubuntu box:

    $ sudo apt-get install rcs
    $ vim ~/.hgrc

… and put the following lines somewhere into your ~/.hgrc:

    [merge-tools]
    merge.priority = 100

The full story

As an active user of the Mercurial version control system, I’ve been struggling with its merging facilities for quite some time. Actually, all I wanted was Subversion-like behavior: when a conflict happens, special text markers are placed into the conflicting source files and one resolves these conflicts manually using one’s favorite text editor (vim in my case).

There is a terse mention of the required behavior in the official Mercurial book (it boils down to setting the HGMERGE env. variable as follows: $ export HGMERGE=merge). However, the Mercurial book recommends using one of the GUI merge tools instead (kdiff3, meld, etc.), and I simply can’t stand GUI stuff for this purpose. I recall one day I had to merge branches over ssh; now how would I do that with a GUI?

I tried setting the HGMERGE env. variable, but it didn’t work since merge wasn’t actually installed on my Ubuntu box. For some reason I didn’t realize that (probably because of some obscure error message), so I simply started using a GUI merge tool… I felt really unhappy and had to live with it for quite some time, until one day a colleague of mine told me to install the rcs package, which actually contains the merge tool!

It also makes sense to use the [merge-tools] section of the .hgrc configuration file instead of the HGMERGE env. variable, but, of course, YMMV.
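For reference, after a conflicting merge the affected files end up with the familiar conflict markers, roughly like this (the exact labels depend on how Mercurial invokes merge):

    <<<<<<< local
    our version of the conflicting line
    =======
    their version of the conflicting line
    >>>>>>> other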

Update: the same trick is possible on Windows as well! All you have to do is install the GNU rcs binaries into your Path. However, there is a problem with the diff3 binary bundled with GNU rcs - it crashes. You have to install diff3 from somewhere else, for example from nixtools. In order to save your time, I’m attaching a zip file containing rcs with a working diff3: GNU rcs (with working diff3)

glibc-2.7 makecontext issues on x86_64

December 28th, 2009

Looks like passing 64-bit values (e.g. pointers) into makecontext does not work properly on x86_64. We are using makecontext for our coroutines implementation, and its correct operation is vital for us.

I’ve been struggling with this bug for a couple of days and have finally found a solution for it. Actually, the solution is trivial; it was the process of spotting the bug which took so much time. By the way, this issue was resolved in the latest releases of glibc, so there is nothing to worry about if you are using a toolchain newer than gcc 4.2.4.

Ok, here is the solution - pass your 64-bit pointer as two 32-bit ints :) Here it comes:

    #include <stdlib.h>
    #include <ucontext.h>

    struct Foo{};

    #if defined(__GNUC__) && defined(__x86_64__) && __GNUC__ < 5 && __GNUC_MINOR__ < 3
    // affected toolchains: the pointer arrives split into two 32-bit halves
    void thread(__uint32_t p1, __uint32_t p2)
    {
      // reassemble the pointer: p1 holds the high half, p2 the low half
      Foo* foo = (Foo*)((__uint64_t)p2 | (((__uint64_t)p1) << 32));
    #else
    void thread(Foo* foo)
    {
    #endif
      // ...
    }

    #define FIBER_STACK 1024*64

    ucontext_t child;

    int main()
    {
      getcontext (&child);

      // Modify the context to use a new stack
      child.uc_link = 0;
      child.uc_stack.ss_sp = malloc (FIBER_STACK);
      child.uc_stack.ss_size = FIBER_STACK;
      child.uc_stack.ss_flags = 0;

      Foo foo;

    #if defined(__GNUC__) && defined(__x86_64__) && __GNUC__ < 5 && __GNUC_MINOR__ < 3
      // split the 64-bit pointer: p1 gets the high half, p2 the low half
      __uint32_t p1, p2;
      p1 = (__uint32_t)(0x00000000FFFFFFFF & (((__uint64_t)&foo) >> 32));
      p2 = (__uint32_t)(0x00000000FFFFFFFF & (__uint64_t)&foo);

      makecontext (&child, (void (*)())&thread, 2, p1, p2);
    #else
      makecontext (&child, (void (*)())&thread, 1, &foo);
    #endif
    }
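Note that the sketch above only prepares the context and never actually enters the coroutine. A minimal sketch of running it could look as follows (main_ctx is a variable I introduce here for illustration; note that uc_link has to be set before makecontext is called):

    ucontext_t main_ctx;

    // inside main(), instead of child.uc_link = 0:
    child.uc_link = &main_ctx;        // resume main when thread() returns

    // ... after the makecontext() call:
    swapcontext (&main_ctx, &child);  // jump into the coroutine and back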

Redirect build errors into vim

December 10th, 2009

Here is a small bash function which wraps the executed command in the shell and redirects all build errors right into vim. In vim you can jump between errors using the standard :cn, :cp commands (as well as view them all using :cope).

    # run the given command, duplicating its output into a temp file
    function vimize ()
    {
      local file=/tmp/vimize.errors
      if [ $# -gt 0 ] ; then
        rm -f "$file"
        "$@" 2>&1 | tee "$file"
      fi
      # if any build errors were captured, open them in vim's quickfix list
      if grep -q ': error:' "$file" ; then
        vim -q "$file" -c :copen
      fi
    }

Just put it into your ~/.bashrc, reload the shell and enjoy. It can be used as follows:

    $ vimize ./run_some_build_script
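The vim -q option reads that file as an error list; it relies on vim’s default errorformat recognizing g++’s usual file:line: error: messages, i.e. the captured lines look something like this:

    foo.cpp:12: error: 'bar' was not declared in this scope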

Using boost::threadpool with boost::future

December 10th, 2009

Here is a small C++ function which allows you to submit async jobs into a thread pool and track their execution status using the concept of futures:

#include "boost/threadpool.hpp"
#include "boost/future.hpp"
#include "boost/utility/result_of.hpp"
#include "boost/shared_ptr.hpp"
 
template<typename Thp, typename Func>
boost::shared_future< typename boost::result_of<Func()>::type >
submit_job(Thp& thp, Func f)
{
  typedef typename boost::result_of<Func()>::type result;
  typedef boost::packaged_task<result> packaged_task;
  typedef boost::shared_ptr<boost::packaged_task<result> > packaged_task_ptr;
 
  packaged_task_ptr task(new packaged_task(f));
  boost::shared_future<result> res(task->get_future());
  boost::threadpool::schedule(thp, boost::bind(&packaged_task::operator(), task));
 
  return res;
}

Here is a small example of its possible usage (note that in submit_job above the packaged_task is held through a shared_ptr because packaged_task itself is noncopyable, while boost::bind needs to copy its arguments):

    User lookup_user_in_database(int id) { ... }
    ...
    int main()
    {
      boost::threadpool::pool thp(3); // number of threads
      boost::shared_future<User> future =
        submit_job(thp, boost::bind(lookup_user_in_database, 10));
      while(!future.is_ready())
      {
        // do something useful in the meantime
      }
      User user = future.get();
      ...
    }

boost::future is now finally officially shipped with boost, starting with the 1.41 release. boost::threadpool is not yet an official boost library; however, you can find it here.

Big kudos to the authors of these amazing libraries!

dctl Xtra 0.1 released

September 14th, 2009

Hi folks!

At my company we created a simple Xtra (called dctl, which means Director ConTroL) which allows us to control Adobe Director via a network socket. The idea is very simple: the Xtra listens on some port using the Multiuser facility and makes it possible to run a set of predefined tasks (which can be very easily extended, since these tasks are written in Lingo) or to eval an arbitrary Lingo string.

The client which accesses dctl is written in PHP and can be used from the shell, e.g.:

    c:\dctl STOP
    c:\dctl PLAY
    c:\dctl EVAL "put 'hello'"

As I said, tasks are written in Lingo, e.g.:

    on task_STOP me, args
     dispatchCommand(8706)
    end
    ------------------------------------------
    on task_PLAY me, args
     dispatchCommand(8705)
    end
    ------------------------------------------
    on task_SAVE me, args
     dispatchCommand(4101)
    end
    ------------------------------------------
    on task_EVAL me, args
     if args.count < 1 then return err("not enough arguments")
     repeat with i=1 to args.count
      str = args[i]
      do str
     end repeat
    end

The first version 0.1 is available at http://code.google.com/p/director-dctl/downloads/list (don’t forget to have a look at the INSTALL file).

dctl is completely free and the source is available at http://code.google.com/p/director-dctl.

P.S. dctl is a part of our Director pipeline, which also includes “dcc”, a command-line tool for processing JavaScript and Lingo sources that we really hope to make public someday as well.

Zveriki wins “The best online game” award at KRI-2009

May 19th, 2009

Hurray! Our project Zveriki won “The best online game” award at KRI-2009 (the Russian analogue of GDC) :) The game is still in active development, and we provided demo access only to the press and the members of the jury. Earlier we hoped the game would be launched in November 2008; however, the World Crisis changed our plans quite a bit, and now we are aiming at the middle of summer 2009.

Best git-svn practices?

February 21st, 2009

I wonder if there are any “best git-svn practices”? Particularly for the following scenario:

1) There is a common svn repository
2) There are several developers who track/sync this common svn repo with their own local git repos using the git-svn bridge (via git svn rebase/dcommit)
3) From time to time these developers using git need to share their changes without affecting the svn repository. For this purpose they set up a shared git repo and exchange their work using pull/push commands
4) It turns out these developers may face conflict problems due to the usage of “git svn rebase” for syncing with the main svn repo. This happens because the rebase operation rewrites the history of the local git branch, so it becomes impossible to push into the shared git repo, and pulling from it often leads to conflicts.

Anybody having the same problem?

Update: Here is what the official git-svn man page says about this problem:

For the sake of simplicity and interoperating with a less-capable system (SVN), it is recommended that all git-svn users clone, fetch and dcommit directly from the SVN server, and avoid all git-clone/pull/merge/push operations between git repositories and branches. The recommended method of exchanging code between git branches and users is git-format-patch and git-am, or just ‘dcommit’ing to the SVN repository.

Running git-merge or git-pull is NOT recommended on a branch you plan to dcommit from. Subversion does not represent merges in any reasonable or useful fashion; so users using Subversion cannot see any merges you’ve made. Furthermore, if you merge or pull from a git branch that is a mirror of an SVN branch, dcommit may commit to the wrong branch.
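For completeness, exchanging changes the way the man page recommends would look roughly like this (the branch names are made up for illustration):

    # export every local commit that is not yet in the svn-tracking branch
    $ git format-patch -o /tmp/patches trunk..my-feature

    # a colleague applies the patches on top of his own git-svn clone
    $ git am /tmp/patches/*.patch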

Looks like my whole svn workflow should be revised :(

Update: All my problems were resolved by migrating the whole repository to Mercurial :) Why Mercurial? That’s simple: it has almost all of git’s features plus very smooth Windows support, which is very important for many members of my team.

Processing results of vimgrep in vim

February 19th, 2009

What I’ve been really missing in vim is a general mechanism for applying arbitrary processing to the results of the :vim[grep] command. What I usually did was record a macro and apply it manually (using :cp) to every entry in the quickfix window - believe me, that’s very boring. Big thanks go to Ben Schmidt, who showed me a couple of vim script commands (in the official vim mailing list) which make it possible to automate this dull process. Here they are (put them into your ~/.vimrc):

    :com! -nargs=1 Qfdo try | sil cfirst |
    \ while 1 | exec <q-args> | sil cn | endwhile |
    \ catch /^Vim\%((\a\+)\)\=:E\%(553\|42\):/ |
    \ endtry

    :com! -nargs=1 Qfdofile try | sil cfirst |
    \ while 1 | exec <q-args> | sil cnf | endwhile |
    \ catch /^Vim\%((\a\+)\)\=:E\%(553\|42\):/ |
    \ endtry

It’s dead simple to use them. For example, say you have a macro @q which makes some changes in a single line and you want to apply it to every line found by the :vim command. Here’s a possible sequence of vim commands:

"search for foo string in all .cpp sources recursively
:vim /foo/ **/*.cpp     
"apply q macro to all found lines
:Qfdo normal @q

The Qfdofile command is a bit different from Qfdo - it applies your command not to every line but to every file found by the :vim search.
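For instance, a mass rename across every file that matched a search might look like this (the pattern is just an illustration; the trailing | update saves each file so that :cnf can leave the buffer):

    :vim /old_name/ **/*.cpp
    :Qfdofile %s/old_name/new_name/ge | update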

Announcing zveriki.com

August 25th, 2008

As you may have noticed, I haven’t been around for quite some time, and here is the reason: I have been totally occupied with my company’s new project zveriki.com, an online multiplayer browser game simulating the life of pets. The project is not yet complete - the release is scheduled for November 2008 - but you can already view some demo videos and screenshots.