|
I've got a refactoring question; I happen to be working in C++ so I thought I'd ask it here. Is there an easy way, using an IDE or script or whatever, to change an operator into a function, i.e. to go through all my code and change "a+b" to "myAddFunc(a,b)" or something like that, ideally specifying the types of a and b? For example, if a is int and b is double, and the code was previously using a static or implicit cast, and I would instead like to use my own function to handle it, do I have to go through all the code and change it manually? Does Eclipse or Xcode or Visual Studio provide a way of doing this easily? Thanks.
|
# ? Apr 20, 2015 06:19 |
|
|
# ? May 11, 2024 15:40 |
|
It's not built into Visual Studio. Visual Assist might be able to help you, but I doubt it. C++ is a really, really hard language to do that kind of thing on because its grammar is such a nightmare.
|
# ? Apr 20, 2015 06:34 |
Does std::atomic work like I think it does? I'm not particularly good at deciphering tech talk, so the C++ reference page doesn't really tell me much, and in the Concurrent Programming course I took we only worked with Java Monitors (as much as I dislike working with Java, this is one thing I love about it.) Based on the name, my assumption is that if I have an std::atomic<int> i = 0 and have 100 separate threads do i++, it will always end up with a value of 100 (whereas with a regular int it could be anything between 1 and 100.) Is this assumption correct?

E: Also related, do C++ threads work like Java tasks? As far as I can tell no matter if I pass it a function that increments a global atomic_int or a regular int, they both come out correct. When we did the same experiment in class, non-monitored functions would produce wildly varying results.

E2: Am I wrong in thinking that incrementation/addition is not an inherently atomic action?

Joda fucked around with this message at 16:44 on Apr 22, 2015 |
|
# ? Apr 22, 2015 12:41 |
Joda posted:
Does std::atomic work like I think it does? I'm not particularly good at deciphering tech talk, so the C++ reference page doesn't really tell me much, and in the Concurrent Programming course I took we only worked with Java Monitors (as much as I dislike working with Java, this is one thing I love about it.) Based on the name, my assumption is that if I have an std::atomic<int> i = 0 and have 100 separate threads do i++, it will always end up with a value of 100 (whereas with a regular int it could be anything between 1 and 100.) Is this assumption correct?

Yes, you are correct here, other than I'm not sure that you will get a number between 1 and 100 in the "regular int" case. My assumption is you will often get junk instead.

Joda posted:
E: Also related, do C++ threads work like Java tasks? As far as I can tell no matter if I pass it a function that increments a global atomic_int or a regular int, they both come out correct. When we did the same experiment in class, non-monitored functions would produce wildly varying results.

My guess is that the way you are structuring your C++ test is hiding the results. Probably by the time the next thread is allocated, the previous thread has already incremented i. Try creating a vector of 1000 ints and having each thread increment all of them and you should see the expected weird behavior.

Joda posted:
E2: Am I wrong in thinking that incrementation/addition is not an inherently atomic action?

No, you are correct--those operations are not inherently atomic. IIRC incrementation is actually three actions--reading a value, incrementing it, and writing the incremented value. I might be wrong on that though.
|
|
# ? Apr 22, 2015 18:13 |
|
VikingofRock posted:No, you are correct--those operations are not inherently atomic. IIRC incrementation is actually three actions--reading a value, incrementing it, and writing the incremented value. I might be wrong on that though. fetch_add should let you do atomic increment.
|
# ? Apr 22, 2015 18:15 |
|
Ice posted:
I've got a refactoring question, I happen to be working in C++ so I thought I'd ask it here.

You could possibly write the script yourself, using a compiler and its error messages, if the types of a, b, and the return value are such that there's negligible risk that an incorrect replacement would compile.

Maybe try a multi-step process: Make unique wrapper types for the left- and right-hand arguments of +, and one for the return type. Then change the signature of operator+ to use these types, and update the code automatically (or mostly automatically) with wrapper functions for the arguments and return value. Then rename it to myAddFunc, and update the code automatically (which should be real easy this time, since the lhs and rhs expressions are already wrapped in unique functions). Then remove the wrappers.

(Many times it's good to do manual refactorings this way, too.) It might be easier to just do it by hand though.
|
# ? Apr 22, 2015 23:48 |
|
How does disabling RTTI "break" (some) uses of dynamic_cast without breaking dynamic dispatch in general? I understand that this is implementation-specific, but it's basically impossible to Google because every way I can think to ask is similar to some other extremely common question.
|
# ? Apr 23, 2015 04:50 |
|
You can disable RTTI and still have a vtable. Implementing RTTI just means adding extra data to the vtable aside from just function pointers.
|
# ? Apr 23, 2015 04:58 |
|
Right, I know that. I can rephrase my question as "What is preventing you from resolving dynamic_casts by hopping around vtables." Maybe I just need to find the right primer on vtable implementation; it's not something I've looked super hard into.
|
# ? Apr 23, 2015 06:01 |
|
Well a vtable is just a list of function pointers. It has no type information. The more interesting question is how do you extend the vtable to add type information for RTTI? And that's pretty highly implementation specific. e: "Inside the C++ Object Model" is a good book for this kind of thing. Paniolo fucked around with this message at 06:16 on Apr 23, 2015 |
# ? Apr 23, 2015 06:14 |
Vtables don't inherently have to point to super classes at all. They just need enough information to resolve method calls, which is a single level lookup for any class.
|
|
# ? Apr 23, 2015 06:14 |
Thanks for the answer. I reconfigured my test so that it does three actions over a loop with 1000 iterations, and am still scratching my head at the result. I defined an invariant that value x (for non-atomic) and value y (for atomic) should be 0 for an arbitrary number of threads, and expected both of them to give garbage-looking results without any modifications (since I'm doing 3 atomic actions per iteration, assuming atomic operations.) However, only the function using atomic values actually produces garbage. This is the code:

C++ code:
code:
C++ code:
C++ code:
code:
|
|
# ? Apr 23, 2015 09:05 |
|
Atomic variables don't magically solve concurrency problems, which appears to be what you're expecting them to do. Specifically, nothing in your atomic example guarantees that one thread will complete an entire iteration of the loop before some other thread touches one of the variables. As for why your non-atomic example always gives you zero, I wouldn't be surprised if a clever optimiser figured out that m - n would always be zero in single-threaded code, and turned the loop into just incrementing m and n without touching x.
|
# ? Apr 23, 2015 09:51 |
Jabor posted:
Atomic variables don't magically solve concurrency problems, which appears to be what you're expecting them to do. Specifically, nothing in your atomic example guarantees that one thread will complete an entire iteration of the loop before some other thread touches one of the variables.

Not at all. Like I said, I expected the initial result for f(). Unless what you're saying is that memory orders don't work like I think they do, and my semaphore implementation is incorrect? What I expect the semaphore version of f() to do is ensure that only one thread can access the loop at any one time.

E: My thinking is that when I acquire a resource, no other thread can access it until the thread that acquired it has released it. Is this thinking incorrect?

E2: I realize my semaphore doesn't really work, because its results are inconsistent, but I'm not sure how to protect my critical regions if I can't lock down a resource.

Joda fucked around with this message at 10:03 on Apr 23, 2015 |
|
# ? Apr 23, 2015 09:55 |
|
Joda posted:Not at all. Like I said I expected the initial result for f(). Unless what you're saying is that memory orders don't work like I think they do, and my semaphore implementation is incorrect? What I expect the semaphore-version of the f() to do is ensure that only one thread can access the loop at any one time. Sorry, I just went to the code and skimmed the text. Your understanding of the original atomic stuff seems fine. Your semaphore implementation, though, seems wonky. You should take a closer look at your P() implementation, and think on what happens if your thread is interrupted (and another thread does something with the semaphore) immediately after you do the load in the loop condition.
|
# ? Apr 23, 2015 10:03 |
|
Joda posted:
What baffles me the most is that I always get the right result with f2,

Memory model. That function (f2, not the other one) will almost certainly (even without optimizations) read the values of m, n, x once at the beginning of the function and write them out once at the end. With optimizations, it gets even better since it won't even loop. The other function always reads and stores for every access and update, and the loop always stays.

There should be an interleaving which causes f2 to give not-0, but it might actually be very hard to trigger. There could even be weird cache-(in?)coherency behavior on that part of your processor which could make it almost impossible to trigger. The only sure-fire way to see it happen would be for one thread, on one core, to be interrupted in the middle of reading the two values of m and n, and to switch to another thread (on the same core) which updates them. This is a window of 1 cycle (because m and n are likely on the same cache line; as soon as the processor loads the value for one, it has the other, too.)

These threads are so simple and short to run, they probably are never interrupted at all. Every one just runs to completion when it gets cpu time. So you probably can't induce a bad interleaving without making them more complex.
|
# ? Apr 23, 2015 17:46 |
|
E: beat
Sex Bumbo fucked around with this message at 17:59 on Apr 23, 2015 |
# ? Apr 23, 2015 17:52 |
|
Sex Bumbo posted:E: beat I think you should put your post back. It's more concrete than my high-level waving around. And it at least corrected my claim that it'd do loads and stores only once in the whole function without optimizations. Evidently (if I recall correctly, you removed the assembly dump, so I can't double check!), it was doing the loads at the start of the loop and store at the end of the loop (without optimizations, right?)
|
# ? Apr 23, 2015 18:13 |
|
Actually it wasn't, and I was totally wrong. I looked at it some more and if you change code:
code:
code:
E: My hypothesis is that a 1000 iteration loop will complete faster than the next thread can begin working. I don't know poo poo about operating systems but I have some data to support this. I took some concurrency captures from VS.

This is the view of f2 with 1000 iterations, no explicit sleeps: Notice that there's almost no overlap between the worker threads. Now, here's f with 1000 iterations. My hypothesis is that using atomics makes the thread execution much longer so there's more chance for overlap. And there certainly is overlap, which might explain why the results are "incorrect" on the atomic version.

I previously thought putting in a sleep would vaguely synchronize the threads as a sort of side effect of the operating system. Sort of like a faux-signal. Here's the capture from the sleep version: This seems to support my idea -- notice the group of four threads that start at exactly the same time, something that doesn't happen without the sleep.

E2: sorry for boring everyone with bs but if you're interested an optimized build will exhibit the aforementioned issue with working on register variables (and will always return 0 even if the threads are explicitly synchronized). Specifying m and n as "volatile int" will reintroduce the problems if the threads manage to overlap because it forces the compiler to operate on the actual addresses of the variables instead of registers.

Sex Bumbo fucked around with this message at 19:21 on Apr 23, 2015 |
# ? Apr 23, 2015 18:25 |
Sex Bumbo posted:
Effort post

This is great, thanks! Also provides some motivation to actually give a poo poo about cache level optimisations. I definitely see the fake synchronisation offered by sleeping the threads; without sleeps, the wrong results get even worse.

Jabor posted:
Sorry, I just went to the code and skimmed the text. Your understanding of the original atomic stuff seems fine.

Do you mean the possibility of a deadlock? I hadn't considered that, so it's definitely going on my to-do, but right now I just want to be able to synchronize threads that I know won't be interrupted. I tried changing the semaphore to use a mutex, since that is apparently the standard for protecting critical regions, but I'm still getting garbage results. I'm sure I'm missing something obvious, since I tried just using a global mutex to protect the loop itself in f(), and I still get wrong results:

Semaphore: C++ code:
C++ code:
code:
E: Maybe if two threads are waiting at mut.lock() and the mutex is unlocked, both threads are released? If this is the case, how would you go about avoiding this? Joda fucked around with this message at 22:29 on Apr 23, 2015 |
|
# ? Apr 23, 2015 21:49 |
|
Using mutex::lock/unlock makes both functions always return 0 for me. C++ code:
|
# ? Apr 23, 2015 22:31 |
No, the result I posted is from a run with mutex.lock()/unlock(). E: If it matters, it's compiled with g++ 4.8.1 with argument -std=c++11 and nothing else. Target is Linux x86_64. Joda fucked around with this message at 22:44 on Apr 23, 2015 |
|
# ? Apr 23, 2015 22:38 |
|
Joda posted:No, the result I posted is from a run with mutex.lock()/unlock(). Try adding -lpthread In my experience programs using POSIX locking primitives often build just fine without that addition, but the locks will do nothing.
|
# ? Apr 24, 2015 03:59 |
C++ code:
code:
Still, something that throws an error is preferable to having undefined behaviour, so I'll take it. Thanks.

EDIT: I tried adding -pthread (instead of -lpthread) and now std::mutex works just fine. Isn't pthread an OS-specific thing, and what is the difference between -pthread and -lpthread? I thought the l just stood for lib? Sometimes GCC completely baffles me. Still, thanks a tonne for the help!

Joda fucked around with this message at 09:09 on Apr 24, 2015 |
|
# ? Apr 24, 2015 08:10 |
|
Joda posted:
EDIT: I tried adding -pthread (instead of -lpthread) and now std::mutex works just fine. Isn't pthread an OS-specific thing, and what is the difference between -pthread and -lpthread? I thought the l just stood for lib? Sometimes GCC completely baffles me. Still, thanks a tonne for the help!

All of the C++11 thread libraries are ultimately built on top of the system libraries, so std::mutex is basically just a standardized wrapper around pthread_mutex on Linux. -lpthread tells the compiler to link the program against libpthread. -pthread does the same but also causes the compiler to define some macros like _REENTRANT, which tells the C runtime to use the thread-safe versions of various functions.
|
# ? Apr 24, 2015 15:21 |
|
The_Franz posted:
Try adding -lpthread

Why does this not give some sort of error somewhere? What does the compiler end up generating in this case?

Joda posted:
Although, as soon as I spawn more than 10.000 threads it starts giving me a system error about a resource being temporarily unavailable.

Doing 10000 micro tasks by using 10000 threads isn't a good idea because of the resources used. If you switch all the threads to futures, you'll get the same functionality but only use a limited number of threads.

Here's the thread activity in green. The top worker thread seems unrelated -- what's important is that the 8 worker threads are all solid, which map to the 8 cores on my computer. There aren't any more threads than this, despite thousands of calls to f/f2. Also note that there aren't those little blue chunks like in my previous post, which I think was some sort of OS overhead, meaning this version should ideally be doing a lot more concurrency -- the original problem of f2 always returning 0 is not exhibited because of all the race conditions the futures create.

E: putting locks around f/f2 will create more threads, but far fewer than 10000.

Sex Bumbo fucked around with this message at 19:12 on Apr 24, 2015 |
# ? Apr 24, 2015 18:50 |
|
Sex Bumbo posted:Why does this not give some sort of error somewhere?
|
# ? Apr 24, 2015 20:00 |
|
The_Franz posted:Try adding -lpthread -pthread is the correct flag to use: -lpthread links against the pthread library, -pthread does that while also defining a bunch of macros The_Franz posted:All of the c++11 thread libraries are ultimately built on top of the system libraries so std::mutex is basically just a standardized wrapper around pthread_mutex on Linux. oops apparently I'm blind
|
# ? Apr 25, 2015 01:30 |
|
I'm looking for a little help on a data structures assignment using chained hashing that I'm having trouble getting started on. I've posted the header file below, and I'm confused about line 51 specifically (stNode** buckets). All the examples I've seen of chained hashing use a 1D array and a linked list to chain nodes together when a collision occurs. The stNode** buckets declaration makes it seem like my professor wants us to allocate a 2D array for chained hashing. How would this work? Or am I wrong here? code:
|
# ? Apr 26, 2015 19:35 |
|
This isn't intended to be a two-dimensional array but rather a dynamic array of pointers, as the comment suggests: code:
C++ code:
Evil_Greven fucked around with this message at 19:57 on Apr 26, 2015 |
# ? Apr 26, 2015 19:53 |
|
Evil_Greven posted:This isn't intended to be an two-dimensional array but rather a dynamic array of pointers, as the comment suggests: Thanks for pointing me in the right direction. Why would you implement it this way vs a standard 1D array?
|
# ? Apr 26, 2015 20:09 |
|
Diametunim posted:
Thanks for pointing me in the right direction. Why would you implement it this way vs a standard 1D array?

If you're wondering why you can't do a statically allocated array, my assumption is that because you have two bucket count variables (an initial and current) you'll have to resize and rehash your table based on the load factor. You'll want to use dynamic allocation since you're not dealing with a compile-time constant for the array size.

Edit: the copy constructor and assignment operator being disallowed gives me a feeling. I hope your instructor has you implement them at some point, since they're pretty critical to get right in a C++-based data structures course. They're also using a pre-C++11 method to disallow it (private member) as opposed to declaring it as = delete. That's not important but maybe you find that interesting.

Star War Sex Parrot fucked around with this message at 20:47 on Apr 26, 2015 |
# ? Apr 26, 2015 20:19 |
|
Star War Sex Parrot posted:
What do you mean by "standard"? The type used to keep track of a dynamically allocated array is a pointer, and since the type being stored in the array is also pointers, you end up with a declaration of stNode** for your buckets array.

When I say "standard" I mean what I'm used to seeing in the tutorials I've been following. My professor only goes over concepts conceptually and rarely discusses the code behind our assignments with us. I suppose I should get better at problem solving on my own then, since without a tutorial or some guidance I end up being pretty lost. Either way, the "standard" I'm used to following is something like this: code:
code:
e: I'm not sure what the rule on double posting is so I'll just edit this post instead. I finished my HashTable assignment but I'm having trouble with a few of the entries and I'm not quite sure why the other ~40 or so entries work just fine. I'd also like some pointers on how I could optimize / clean my code (StrTab.cpp) up if anyone has any. Here's a gist of my assignment so I don't clutter the thread anymore. Diametunim fucked around with this message at 19:36 on Apr 30, 2015 |
# ? Apr 26, 2015 23:26 |
|
Probably a basic question, but for some reason I can't get my program to print out C-Strings. I use C-Strings to allow for reading and writing of structs from and to a file as one continuous unit. The relevant code snippets: code:
America Inc. fucked around with this message at 10:30 on May 2, 2015 |
# ? May 2, 2015 10:25 |
|
LookingGodIntheEye posted:Probably a basic question, but for some reason I can't get my program to print out C-Strings. I use C-Strings to allow for reading and writing of structs from and to a file as one continuous unit. Why exactly can't you use STL strings? It seems that those would work.
|
# ? May 2, 2015 12:14 |
|
hooah posted:Why exactly can't you use STL strings? It seems that those would work. STL strings don't do that, you have to make a serialize function for your structs and stuff. Not that it's not legitimate to say "write this a less atrocious way", but you're asking the one "why did you do that" question that's already answered. This would be a good time to learn some debugging techniques, I'd say. But the most direct answer is "strcmp doesn't work that way, so you're not reading any records." (With any kind of debugging, either breakpoint/step debugging or sticking print statements everywhere to see what path you're taking, you'd [probably] discover that the code in the loop is never run.)
|
# ? May 2, 2015 14:59 |
|
I don't understand what, in the code, can't be done with STL strings. I guess I don't get the meaning of "allow for reading and writing of structs from and to a file as one continuous unit".
|
# ? May 2, 2015 15:10 |
|
hooah posted:I don't understand what, in the code, can't be done with STL strings. I guess I don't get the meaning of "allow for reading and writing of structs from and to a file as one continuous unit". He wants to read and write the struct with one call to inFile.read (and presumably a corresponding outFile.write somewhere else). If the struct had stl strings in it he'd need to read and write the contents of each string manually because they wouldn't all live in a single contiguous block of memory like they do now.
|
# ? May 2, 2015 15:15 |
|
LookingGodIntheEye posted:
Probably a basic question, but for some reason I can't get my program to print out C-Strings. I use C-Strings to allow for reading and writing of structs from and to a file as one continuous unit.

This loop is completely wrong.

1. You don't fill or zero the account struct before the first comparison so it will contain uninitialized data.
2. strcmp returns 0 when a match has been found and a non-zero value otherwise so your loop will exit on the first mismatched record.
3. !inFile.eof() will exit the loop if you are NOT at the end of the file.
4. The eof check should be before the read otherwise you will exit with a failure after the last record is read but before the string comparisons.

What you want here is a do-while loop so that you populate the account struct before the comparisons occur. Move the eof check before the read so that you don't exit while you have a valid record to test against.
|
# ? May 2, 2015 16:57 |
|
|
|
LookingGodIntheEye posted:I use C-Strings to allow for reading and writing of structs from and to a file as one continuous unit.
|
# ? May 2, 2015 18:59 |