|
I can't even tell if the question is "is it a compiler bug to compile this" or "is it a compiler bug to fail to compile this".
|
# ¿ Apr 5, 2023 02:12 |
|
Subjunctive posted:I’ll take either! I’m not picky.
|
# ¿ Apr 5, 2023 02:18 |
|
Does that mean struct Foo::X should be resolving to the enum that has shadowed the struct? In this context is it even possible to specify struct Foo::struct X?
|
# ¿ Apr 5, 2023 02:52 |
|
Subjunctive posted:No, AIUI `struct Foo::X` is an explicit tag name lookup which should ignore the enum.
|
# ¿ Apr 5, 2023 02:57 |
|
Xarn posted:I wish to subscribe to the "rjmccall explains C++" service.
|
# ¿ Apr 5, 2023 13:13 |
|
pseudorandom name posted:C++ turned away from the face of god when it stepped beyond C With Classes.
|
# ¿ Apr 7, 2023 14:11 |
|
Volte posted:I have some bad news about smart pointers
|
# ¿ Apr 8, 2023 00:34 |
|
Incidental thing: absl::AnyInvocable is great, and it's aggravating that there's nothing like it in the stdlib until C++23's std::move_only_function. Finally I can do:
code:
|
# ¿ Apr 11, 2023 23:50 |
|
Volguus posted:No complaints. No warnings. No ... nothing. At least a "hey moron, are you sure you don't want to publicly inherit that? It's kinda useless otherwise." would have been appreciated. And compiled with -Wall, -pedantic, everything and cherry-on-top. Clang was happy as a clam.
|
# ¿ Apr 14, 2023 11:36 |
|
Volguus posted:It should have been
|
# ¿ Apr 15, 2023 00:15 |
|
Wipfmetz posted:So i guess i'll better use a common condition variable to hand over work items to worker threads and use a good old-fashioned bool to indicate "no more work incoming, just return from your thread's mainloop, thank you". (And you shouldn't need any extra objects to wait for the threads to all finish, because there's join for that, assuming that when you reach that point you're waiting for all the worker threads, or specific worker threads, to complete.)
|
# ¿ May 24, 2023 13:39 |
|
LLSix posted:I really like https://en.cppreference.com/w/ as a reference for C & C++. It does a good job of providing in-depth technical details without being as overwhelming as the standards. The cppreference.com site's search tool is terrible but googling cppreference <whatever you need> works pretty well.
|
# ¿ Jul 12, 2023 01:26 |
|
Dijkstracula posted:I've got Opinions on your instructor's decisions there, but I also had Opinions on your other class's instructor's decisions, so I guess at least there's consistency

Then I thought it was comparing *two* sorted lists, where one has had one item removed, and looking for what the removed item is, which would be a valid use for a binary search provided duplicate entries are not allowed (because everywhere to the right of the target a[i] != b[i], and everywhere to the left a[i] == b[i]).

Then I saw that was not the case, and I thought for a moment "oh, but if you find a missing at 50% and look left and find another at 25%, then you don't have to look at the ones in between, so there is a saving after all."

Then I realized that no, you still have to look at the 25% to the left (if there are no others missing in there), and if you just did a linear scan from the left you'd look at those and then stop at the first, so it would be shorter by one lookup in this case, and there is no case where the binary search reads fewer items than the linear scan from the left for this algorithm. Jesus, is this instructor *intentionally* making nonsense problems? Is his entire course a piece of performance art commenting on blockchain and AI?
|
# ¿ Sep 25, 2023 23:38 |
|
Xerophyte posted:Nah, binary search works fine here. The proposed algorithm effectively splits the array into two half-sized subarrays, tests (in constant time) if there's a missing number in the left sublist,
|
# ¿ Sep 26, 2023 01:15 |
|
It's not really clear what you're asking for - you want something to happen after something else completes, but on what thread?

If you want it to happen on a specific thread, that waits until that specific thing is done, you want a std::future/promise.

If you want it to happen on a specific thread or pool of threads, that does various pieces of work after various events, you want a dispatcher sort of pattern: the thing that completes the work pushes a lambda (or some other sort of object indicating the work to be done) onto a queue, which wakes up a thread to consume from the queue if one is blocked; if all the threads are busy then the action waits until one of them is done with what it's already doing.

If you want the action to run on the same thread as the one doing the thing you were waiting for, you probably just want to have that thread call a callback when it's done.

I don't think there's a standard library for a dispatcher-queue, but it's pretty simple: you guard a deque with a mutex and a condition variable, lock-pushback-signal-release to add work, lock-popfront-release to consume work; if there's nothing to popfront you wait on the condition variable and try again. Plus you have to figure out your own termination condition and termination behavior.
|
# ¿ Oct 5, 2023 01:40 |
|
Rocko Bonaparte posted:Yeah I was thinking of the JavaScript version, specifically:

1. Assign the 'then' action to a thread that is waiting for it (that's what it effectively does already).
2. Do the 'then' action on the thread that's completing the future (this is just a callback; you don't need a future/promise at all for this, just pass in a std::function and name it "then").
3. Post the 'then' action to a threadpool or something (which requires a whole dispatcher setup that doesn't exist by default).
4. Start a whole new thread just to handle the 'then' action.

1 is what it already is, 2 doesn't need a future at all, 3 subtly requires a bunch of prerequisites, and 4 is a performance disaster.

I guess the thing that we're familiar with from the JavaScript-style 'then' is that there's just one thread, and at some point it's in a "not doing anything" state, and that's when any async-resolved 'then' actions get resolved. But C++ doesn't have a "not doing anything" state by default, so there's no common place to resolve these things.
|
# ¿ Oct 5, 2023 05:07 |
|
leper khan posted:Opening on same, closing on its own line. Don't curry

Also, sometimes opening and closing both on the same line if it's short.
|
# ¿ Nov 3, 2023 23:06 |
|
Presto posted:No it isn't because those go on the line after the brace. code:
|
# ¿ Nov 4, 2023 14:40 |
|
ShoulderDaemon posted:When I first started at my current employer, one of the simulators I became owner of was consistently formatted like this:
|
# ¿ Nov 5, 2023 00:27 |
|
epswing posted:I've got an C++ app (MFC running on Windows 10/11) every few seconds polling (via libcurl) an ASP.NET site running on IIS on the same PC. I'd like to stop polling and reverse this communication, by having the ASP site signal the C++ app somehow. The content of the signal is small (a few bytes) and not frequent (once every few seconds). SignalR comes to mind but the .NET Framework (i.e. pre .NET Core) cpp SignalR client library seems abandoned. Named pipe? Socket? MSMQ? Is there a recommended/modern mechanism for this?
|
# ¿ Nov 22, 2023 21:36 |
|
Dijkstracula posted:(I've seen this done with a bunch of different mechanisms: UNIX domain sockets, UDP datagrams, long-polling over TCP, websockets. All things being equal, none are any better or worse than any other, but certain choices (websockets, I imagine, in particular) might be more difficult to do in MFC-land.)

Also this now has me wondering whether long-polling over HTTP/3 (which is not over TCP) would make sense or not.

Edit: also realizing maybe long-polling wasn't quite the right term for what I meant; that's generally for notification of rare updates. HTTP subscription for fast small updates is *similar*, but instead of one-poll, one-eventual-reply, it's one-poll, stream-replies, re-poll-when-client-or-server-timeout-nears. I don't know if there's a name for that. It also has a good chance of playing poorly with proxies.

roomforthetuna fucked around with this message at 23:43 on Nov 22, 2023
|
# ¿ Nov 22, 2023 23:40 |
|
Joke answer for interprocess communication: grpc. Not a joke in that it wouldn't work, but a joke in that every 3 months you'd have to restructure your entire build system because Google decided to change protobuf and/or absl and/or grpc in a not-backwards-compatible way AGAIN, and it's a massive pain in the arse.
|
# ¿ Nov 23, 2023 13:35 |
|
Subjunctive posted:Wait, I thought protobuf was really stable?

quote:But also, just because Google does something doesn’t mean you have to upgrade unless they’re fixing something that’s an issue for you. Your software works, you can leave it alone.

(Having any dependency that uses tensorflow would be another thing that probably would force you to update the whole chain of google libraries.)

Though actually not just a work rant - I moved my personal project from protobuf to flatbuffers in the hope of escaping google-churn, and even that wasn't stable, so I ended up just writing my own serialization API that's better for my purposes in literally every way anyway. (Smaller generated code, simpler API, smaller serialized size, and embedded types.)
|
# ¿ Nov 23, 2023 20:17 |
|
Jabor posted:None of those things you've mentioned change the wire format though? Like, if you haven't changed the proto definition itself, you can upgrade the libraries and then keep reading protos serialised by the old version (and serialise protos that the old version can read). Unless you're using the weirdo "serialise to json instead of to the actual proto wire format" thing, I dunno how that one works.

Google pretty explicitly does not give a poo poo about open source clients of their libraries except *maybe* abseil, and it shows in how they migrate APIs with no kind of bridge deprecation period. Or, perhaps the best example, in how they literally had their protobuf open source library become build-incompatible with the protobuf-javascript open source library that is also owned by Google, and stay that way for over a year, with a front-page readme on the github repo that straight up said, as a gently caress you, "this is broken, we have this working internally and don't have funding to support the open source version, we expect to have it fixed by November 2022", up until about two weeks ago (in November 2023) when it actually updated.

So anyway, it was a throwaway joke about the overkill of using grpc for a task where it wouldn't be necessary or helpful, because it's front-of-mind for me: I've been doing a version update *for over three loving weeks*, it's so bad this time. The serious comment I'm making is "if you don't have to, and there's not a really big benefit, don't use a Google library, because the future cost is almost certainly higher than it would be for other similar libraries."

Edit: We all know this about google products in general of course, but for some reason it remains a bit shocking when it happens with libraries.

roomforthetuna fucked around with this message at 03:28 on Nov 24, 2023
|
# ¿ Nov 24, 2023 03:25 |
|
Jabor posted:I guess I misunderstood what you meant by stable serialisation.

1. In old proto, you *could* have the same message serialized different ways, but if you had the message in native form and serialized it, you would always get identical output bytes. This means you could compare two messages with a memcmp of the serialization, or hash it, for example.
2. When map types were added, this changed, because a map is an implicitly unordered type. In golang, map types became *intentionally unstable* for serialization, which at the time intentionally broke many existing unit tests even inside google.
3. To compensate for this, a "serialize deterministically" option was added to various serializers, so you could optionally get back the behavior from timepoint 1. This would deterministically sort map types when serializing.
4. When the "any" type was added, it broke the behavior of that option, but the option still exists, making the behavior quite surprising. The TextFormat serializer will, in the current version, serialize deterministically even through Any types if you set the right options, but the binary serializer does not have the options to do this; if there's a map type inside the message inside an Any field, or if the Any field was previously serialized a different way (as has been valid all along), asking for deterministic serialization will get you nondeterministic serialization.

For a fun sequence of events around this, here's a little adventure through the years:

https://github.com/envoyproxy/envoy/pull/5814 - "oh no it's not deterministic"
https://github.com/protocolbuffers/protobuf/issues/5668 - "you can make it deterministic" "oh yeah, okay, we did that, we're good"
https://github.com/protocolbuffers/protobuf/issues/5731 - "actually no we're not, they still aren't deterministic" "oh yeah, Any fields aren't supposed to do that. Try just not using them."
https://github.com/envoyproxy/envoy/commit/647aea1d97be232930306183f94536d3e1f7d9ed - "aha, text formatting supports this"
https://github.com/envoyproxy/envoy/pull/30761 - "yeah but the performance of that loving sucks"
|
# ¿ Nov 24, 2023 03:52 |
|
Subjunctive posted:I don’t think any of our other in-use serializers are guaranteed to be deterministic, so that’s probably fine for us. Weird that they want it to be deterministic but also not?
|
# ¿ Nov 26, 2023 00:28 |
|
Subjunctive posted:Sorry, I was typing while distracted and forgot to add that per my source they are “not very close”.

Unfortunately it doesn't actually work like that, because neither of them uses the open source repository; they both build in a shared monorepo, so the changes are easy and simultaneous, the victim team doesn't even have to fix it themselves, and any breakages are just foisted onto open source users later without much of a care.
|
# ¿ Nov 28, 2023 01:43 |
|
nelson posted:The vector is just the complete data set initialized from a file. Nothing is added or deleted once the file loading is done, although the objects pointed to can be modified in a dynamically determined order, which is what the queue is for.

shared_ptr and weak_ptr carry a performance cost that in some circumstances can be significant. I tried turning a horrible construct that used unique_ptr with a custom destructor (to make some values shared singletons and others deleteable) into a shared_ptr, so the singletons could just duplicate the existing pointer and the deleteable ones would be allocated, because the weird construct was hideously hard to follow and had a whole bunch of issues with having to declare specified destructors everywhere; but the performance with shared_ptr was 3x slower.

I did manage to clean it up in the end, but mostly by leaving the underlying structure alone and just putting some extra wrappers around it to make it less gross at the use-sites.

roomforthetuna fucked around with this message at 01:45 on Dec 13, 2023
|
# ¿ Dec 13, 2023 01:39 |
|
OddObserver posted:It looks like std::shared_ptr requires barriers on refcount decrement, plus also potentially another allocation if you are holding it wrong.

There is a fairly common construct, the refcounted pointer, which is like shared_ptr without the barriers, but it doesn't exist in the standard library. And it still wouldn't be necessary or useful for the context under discussion. I should probably have tried using that in the context I was talking about, though; it probably would have been the best of both worlds, and there was already an implementation of it in the project in question.

Edit: the other nice thing about mostly using unique_ptr is that it gets you into habits that are beneficial for shared_ptr too, because you *can't* forget to std::move a unique_ptr, whereas if you don't std::move (when you could have) a shared_ptr then you end up performing extra increments and decrements and barriers.
|
# ¿ Dec 13, 2023 02:30 |
|
Plorkyeran posted:std::make_shared avoids the extra allocation, at the cost of making the allocation stay alive as long as there's any weak_ptrs pointing at it.

But that's interesting, I didn't realize weak_ptrs keep the allocation alive if it's been made with make_shared. It makes sense now you say it, and I guess is mostly unimportant since the weak_ptrs should all eventually get destroyed too anyway, but could be important for a big enough object.
|
# ¿ Dec 13, 2023 14:19 |
|
Volguus posted:Aaah, ok, now I understand the initial statement. Yes, the object is destructed and the destructor is called and all, but you're right, the drat thing is still there somewhere.

Yeah, you don't want weak_ptrs to be alive for too long.
|
# ¿ Dec 14, 2023 05:25 |
|
Nalin posted:I was mainly complaining about how that specific, very important bit of functionality is something that isn't really known or noticed unless you are reading implementation notes. Especially since many places that teach it basically say it's a way to avoid having to ever write "new" in your code.

It's just another annoying thing you have to keep in your mind.
|
# ¿ Dec 15, 2023 02:45 |
|
rjmccall posted:std::shared_ptr is definitely a much more abstract type than you might expect: it can either take ownership of an existing allocation or co-allocate, it defaults to using the standard allocator but can work with an arbitrary one, it supports a bunch of related features around weak references, etc. All of those choices have costs that get paid at runtime, and modern designers would probably force that all to be statically explicit. And maybe they’d be right to; I dunno, though.

Edit: vvvv I meant supporting "co-allocation or not", like it should be capable of doing both - there's a performance cost in *doing* the worse one, but supporting both means you only do the worse one when that's what the user of the library asked for. As a contrast to supporting thread-safety, weak_ptr or shared_from_this, which have a performance cost whether you're using them or not. [Or, depending on implementation, have no performance cost to not use but are performance-awful when you do use them.]

roomforthetuna fucked around with this message at 13:58 on Dec 15, 2023
|
# ¿ Dec 15, 2023 03:46 |
|
Xarn posted:I actually ended up writing more about this here.
|
# ¿ Feb 13, 2024 15:02 |
|
Xarn posted:Yep, it is called the spaceship operator, and because this is C++, it returns one of std::strong_ordering, std::weak_ordering, and std::partial_ordering. These are not to be confused with std::strong_order, std::weak_order and std::partial_order.
|
# ¿ Feb 14, 2024 03:31 |
|
Plorkyeran posted:It's certainly not the most elegant thing out there, but it lets algorithms like sorting that need a strong ordering express that in the type system.

std::strong_order and std::strong_ordering both existing is awkward, but they are at least things that go together, not totally unrelated types.
|
# ¿ Feb 14, 2024 14:24 |
|
Nalin posted:lol, watch out though, because if you explicitly default the destructor, copy-constructor, or copy-assignment operator, then the compiler won't implicitly generate a move constructor or move assignment operator for you.
|
# ¿ Feb 17, 2024 15:04 |
|
leper khan posted:backward compatibility before move. if you dont have move and you do have copy, you may be in a codebase that doesnt know about move, and only expects copy. so moving and invalidating will likely break code. esp for libraries written to old standard interoperating with newer code.
|
# ¿ Feb 17, 2024 21:33 |
|
I prefer old-school Makefiles for having full control over everything, but if I had to use a modern thing, I like bazel a lot more than CMake. The learning curve to get something just generally working isn't too bad, but once you start getting into needing to be able to import a dependency that isn't something you own, yeah, it can get pretty gnarly. That's probably true of any system.
|
# ¿ May 11, 2024 14:53 |