Problem with gcc4.7 and call_once

Chris Jones jonesc at hep.phy.cam.ac.uk
Thu Aug 8 08:14:24 PDT 2013


On 08/08/13 16:04, David Barto wrote:
> Interesting result of the build of libstdcxx
>
> gcc 4.8 rebuilt because I removed the /opt/local/lib/libstdc++.6.dylib.

You deleted it by hand? Not really a good idea, as it's a file managed by 
a port, and things tend to go wrong when you start deleting such things 
by hand.

I would manually remove your gccXY and libstdcxx ports, then try 
installing them again properly via port.

>
> The resulting libstdc++.6.dylib built by gcc4.8 was:
>
> 522_ ls /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_gcc48/gcc48/work/destroot/opt/local/lib/gcc48/libstdc++.*
>   2864 -rwxr-xr-x  1 root  wheel  1463728 Aug  8 07:49 /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_gcc48/gcc48/work/destroot/opt/local/lib/gcc48/libstdc++.6.dylib
>
> Fine.
>
> When complete, however, /opt/local/lib/gcc48 has a symbolic link to /opt/local/lib/libstdc++.6.dylib
>
> which is NOT the version just built. In fact, since the file '/opt/local/lib/libstdc++.6.dylib' doesn't exist, the port started the build over again. This seems reasonable.
>
> However, since the file that gets built is not installed, this appears to be the problem behind the std::call_once issue.
>
> Any help on getting the port to install the proper file?
>
> 	David
>
>
> On Aug 7, 2013, at 10:43 AM, Jeremy Huddleston Sequoia <jeremyhu at macports.org> wrote:
>
>> Did it work when you were using libstdcxx-4.7?
>>
>> __once_proxy is just looking up some other function (__once_call) using __emutls_get_address and executing it (makes sense based on the name).  __emutls_get_address is returning 3 in this instance, so something looks wrong with emutls:
>>
>> (lldb) disassemble -n __once_proxy
>> libstdc++.6.dylib`__once_proxy:
>>    0x1000e974e:  pushq  %rbp
>>    0x1000e974f:  movq   %rsp, %rbp
>>    0x1000e9752:  leaq   463719(%rip), %rdi        ; __emutls_v._ZSt11__once_call
>>    0x1000e9759:  callq  0x100101880               ; libstdc++.6.dylib.__TEXT.__text + 602364
>> -> 0x1000e975e:  movq   (%rax), %rax
>>    0x1000e9761:  callq  *%rax
>>    0x1000e9763:  popq   %rbp
>>    0x1000e9764:  ret
>> (lldb) disassemble -s 0x100101880
>> libstdc++.6.dylib`__emutls_get_address:
>> ...
>> (lldb) register read
>> General Purpose Registers:
>>        rax = 0x0000000000000003
>> ...
>>
>>
>>
>>
>>
>> On Aug 7, 2013, at 9:35, Brian D. McGrew <brian at visionpro.com> wrote:
>>
>>> Same here with gcc-4.7 and gcc-4.8
>>>
>>> Program received signal EXC_BAD_ACCESS, Could not access memory.
>>> Reason: 13 at address: 0x0000000000000000
>>> [Switching to process 10581 thread 0x1203]
>>> 0x00000001000c6b20 in __once_proxy ()
>>> _______________________________________________________________________________
>>> Error while running hook_stop:
>>> Value can't be converted to integer.
>>> (gdb) where
>>> #0  0x00000001000c6b20 in __once_proxy ()
>>> #1  0x00007fff8b36eff0 in pthread_once ()
>>> #2  0x0000000100001195 in ?? ()
>>> (gdb)
>>>
>>>
>>> -brian
>>> --
>>>
>>>
>>> Brian McGrew
>>> brian at visionpro.com
>>>
>>>
>>>
>>>
>>> On 8/7/13 9:22 AM, "David Barto" <DBarto at visionpro.com> wrote:
>>>
>>>> Same results with gcc 4.8 +universal
>>>>
>>>> 649_ rm threading ; make threading
>>>> /opt/local/bin/g++-mp-4.8 -std=c++11 -g threading.cpp -o threading
>>>> 650_ ./threading
>>>> Segmentation fault: 11
>>>>
>>>> Though I have not made any changes to libstdc++, just updated to the last
>>>> version from the port on Monday.
>>>>
>>>> 	David
>>>>
>>>>
>>>> On Aug 7, 2013, at 9:09 AM, David E Barto <dbarto at visionpro.com> wrote:
>>>>
>>>>>
>>>>> On Aug 7, 2013, at 8:44 AM, Jeremy Huddleston Sequoia
>>>>> <jeremyhu at apple.com> wrote:
>>>>>
>>>>>> Can you provide a reproducible test case?
>>>>>>
>>>>>
>>>>> Compile line is:
>>>>> /opt/local/bin/g++-mp-4.7 -std=c++11 -g threading.cpp -o threading
>>>>>
>>>>>
>>>>> The following is the result of the execution of the code.
>>>>>
>>>>> Program received signal EXC_BAD_ACCESS, Could not access memory.
>>>>> Reason: 13 at address: 0x0000000000000000
>>>>> [Switching to process 36254 thread 0x1203]
>>>>> 0x00000001000d1b20 in __once_proxy ()
>>>>> (gdb)
>>>>>
>>>>> The code follows.
>>>>> With the exception of the changes for GCC 4.7 and a 'main' at the end
>>>>> this is the thread library as posted at:
>>>>> 	https://github.com/progschj/ThreadPool
>>>>> I'm using the example code that is specified on the github as the
>>>>> example main here.
>>>>>
>>>>> threading.cpp
>>>>>
>>>>> #include <vector>
>>>>> #include <queue>
>>>>> #include <memory>
>>>>> #include <thread>
>>>>> #include <mutex>
>>>>> #include <condition_variable>
>>>>> #include <future>
>>>>> #include <functional>
>>>>> #include <stdexcept>
>>>>>
>>>>> typedef std::thread worker_t;
>>>>>
>>>>> class ThreadPool {
>>>>> public:
>>>>>   ThreadPool(size_t threads);
>>>>> #if (__GNUC__ <= 4) || (__GNUC_MINOR__ < 8)
>>>>>   //
>>>>>   // By default thread pools run at a lower priority
>>>>>   //
>>>>>   template<class T, class F, class... Args>
>>>>>   std::future<T> enqueue(F&& f, Args&&... args);
>>>>> #else
>>>>>   template<class F, class... Args>
>>>>>   auto enqueue(F&& f, Args&&... args)
>>>>>       -> std::future<typename std::result_of<F(Args...)>::type>;
>>>>> #endif
>>>>>   ~ThreadPool();
>>>>> private:
>>>>>   // need to keep track of threads so we can join them
>>>>>   std::vector< worker_t > workers;
>>>>>   // the task queue
>>>>>   std::queue< std::function<void()> > tasks;
>>>>>
>>>>>   // synchronization
>>>>>   std::mutex queue_mutex;
>>>>>   std::condition_variable condition;
>>>>>   bool stop;
>>>>> };
>>>>>
>>>>> // the constructor just launches some amount of workers
>>>>> inline ThreadPool::ThreadPool(size_t threads) : stop(false)
>>>>> {
>>>>>   for(size_t i = 0;i<threads;++i)
>>>>>   {
>>>>>       workers.emplace_back(
>>>>>           [this]
>>>>>           {
>>>>>               while(true)
>>>>>               {
>>>>>                   std::unique_lock<std::mutex> lock(this->queue_mutex);
>>>>>                   while(!this->stop && this->tasks.empty())
>>>>>                       this->condition.wait(lock);
>>>>>                   if(this->stop && this->tasks.empty())
>>>>>                       return;
>>>>>                   std::function<void()> task(this->tasks.front());
>>>>>                   this->tasks.pop();
>>>>>                   lock.unlock();
>>>>>                   task();
>>>>>               }
>>>>>           }
>>>>>       );
>>>>>   }
>>>>> }
>>>>>
>>>>> #if (__GNUC__ <= 4) || (__GNUC_MINOR__ < 8)
>>>>> template<class T, class F, class... Args>
>>>>> // coverity[pass_by_value]
>>>>> inline std::future<T>
>>>>> ThreadPool::enqueue(F&& f, Args&&... args)
>>>>> {
>>>>>   //typedef typename std::result_of<F(Args...)>::type return_type;
>>>>>
>>>>>   // don't allow enqueueing after stopping the pool
>>>>>   if(stop)
>>>>>       throw std::runtime_error("enqueue on stopped ThreadPool");
>>>>>
>>>>>   auto task = std::make_shared< std::packaged_task<T()> >(
>>>>>           std::bind(std::forward<F>(f), std::forward<Args>(args)...)
>>>>>       );
>>>>>
>>>>>   std::future<T> res = task->get_future();
>>>>>   {
>>>>>       std::unique_lock<std::mutex> lock(queue_mutex);
>>>>>       tasks.push([task](){ (*task)(); });
>>>>>   }
>>>>>   condition.notify_one();
>>>>>   return res;
>>>>> }
>>>>>
>>>>> #else
>>>>> // add new work item to the pool
>>>>> template<class F, class... Args>
>>>>> auto ThreadPool::enqueue(F&& f, Args&&... args)
>>>>>   -> std::future<typename std::result_of<F(Args...)>::type>
>>>>> {
>>>>>   typedef typename std::result_of<F(Args...)>::type return_type;
>>>>>
>>>>>   // don't allow enqueueing after stopping the pool
>>>>>   if(stop)
>>>>>       throw std::runtime_error("enqueue on stopped ThreadPool");
>>>>>
>>>>>   auto task = std::make_shared< std::packaged_task<return_type()> >(
>>>>>           std::bind(std::forward<F>(f), std::forward<Args>(args)...)
>>>>>       );
>>>>>
>>>>>   std::future<return_type> res = task->get_future();
>>>>>   {
>>>>>       std::unique_lock<std::mutex> lock(queue_mutex);
>>>>>       tasks.push([task](){ (*task)(); });
>>>>>   }
>>>>>   condition.notify_one();
>>>>>   return res;
>>>>> }
>>>>> #endif
>>>>>
>>>>> // the destructor joins all threads
>>>>> inline ThreadPool::~ThreadPool()
>>>>> {
>>>>>   {
>>>>>       std::unique_lock<std::mutex> lock(queue_mutex);
>>>>>       stop = true;
>>>>>   }
>>>>>   condition.notify_all();
>>>>>   for(size_t i = 0;i<workers.size();++i)
>>>>>   {
>>>>>       workers[i].join();
>>>>>   }
>>>>> }
>>>>>
>>>>> #include <iostream>
>>>>>
>>>>> int
>>>>> main(int argc, char *argv[])
>>>>> {
>>>>>   // create thread pool with 4 worker threads
>>>>>   ThreadPool pool(4);
>>>>>
>>>>>   // enqueue and store future
>>>>>   auto result = pool.enqueue<int>([](int answer) { return answer; }, 42);
>>>>>
>>>>>   // get result from future
>>>>>   std::cout << result.get() << std::endl;
>>>>>
>>>>> }
>>>>>
>>>>
>>>> _______________________________________________
>>>> macports-users mailing list
>>>> macports-users at lists.macosforge.org
>>>> https://lists.macosforge.org/mailman/listinfo/macports-users
>>>
>>
>
>


