|
What are the official prerequisites for this course?
|
# ? Nov 15, 2023 16:58 |
|
|
# ? May 26, 2024 10:50 |
|
ultrafilter posted:What are the official prerequisites for this course? Programming 2: Basics, where we learn how classes, pointers, and other basic stuff work in C++, and where we build a graphical Snake game with Qt Widgets at the end. I finally got the dynamic graph building to work when more data is added. It even appears to work as expected, based on tests. It took quite a bit of thinking to realise what kinds of problems I need to solve for the graph building, and some experiments to implement the solutions. For example, if one publication is added to n affiliations, I need a Cartesian product of the affiliations, and then either add new connections between them or strengthen existing ones. Next I'll try to implement BFS for the get-any-path function. Now every node stores an unordered set of ConnectionIDs, which can be used to access the node's connections from an unordered map. This should allow at least BFS to work! I guess I need to implement some hashing functions for these umaps and usets too, like this: C++ code:
|
# ? Nov 15, 2023 18:43 |
|
That doesn't seem to line up with the assignments you've posted. Are they limited in how many students they can accept into upper-level courses? In that case, maybe this is the weedout course. If not, it sounds like it's just incredibly badly taught.
|
# ? Nov 15, 2023 18:47 |
|
ultrafilter posted:That doesn't seem to line up with the assignments you've posted. Are they limited in how many students they can accept into upper-level courses? In that case, maybe this is the weedout course. If not, it sounds like it's just incredibly badly taught. Well, actually I just noticed they suggest the book Introduction to Algorithms, Second Edition by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein as the course book. I don't remember it being mentioned, but I found it in the course description (who reads those?!). They don't reference the book in any course material; perhaps they assume students will figure out by themselves which chapters they need to read if they need more information. So totally my fault. Sorry for the n00b questions here
|
# ? Nov 15, 2023 18:58 |
|
I've got a C++ app (MFC, running on Windows 10/11) polling an ASP.NET site (running on IIS on the same PC) every few seconds via libcurl. I'd like to stop polling and reverse this communication by having the ASP site signal the C++ app somehow. The content of the signal is small (a few bytes) and infrequent. SignalR comes to mind, but the .NET Framework (i.e. pre-.NET Core) C++ SignalR client library seems abandoned. Named pipe? Socket? MSMQ? Is there a recommended/modern mechanism for this?
epswing fucked around with this message at 21:58 on Nov 22, 2023 |
# ? Nov 22, 2023 21:33 |
|
epswing posted:I've got a C++ app (MFC, running on Windows 10/11) polling an ASP.NET site (running on IIS on the same PC) every few seconds via libcurl. I'd like to stop polling and reverse this communication by having the ASP site signal the C++ app somehow. The content of the signal is small (a few bytes) and not frequent (once every few seconds). SignalR comes to mind, but the .NET Framework (i.e. pre-.NET Core) C++ SignalR client library seems abandoned. Named pipe? Socket? MSMQ? Is there a recommended/modern mechanism for this?
|
# ? Nov 22, 2023 21:36 |
|
If "thing being polled" is always going to live on the same host and both programs have access to the file system, you could drop a flag file and use inotify (or the Windows equivalent, ReadDirectoryChangesW) to watch for the file being created/modified/whatever. (I've seen this done with a bunch of different mechanisms: UNIX domain sockets, UDP datagrams, long-polling over TCP, websockets. All things being equal, none is much better or worse than any other, but certain choices (websockets in particular, I imagine) might be more difficult to do in MFC-land.)
|
# ? Nov 22, 2023 21:51 |
Also, if it's happening on a single machine within the same login session, you can consider just using Win32 messages (SendMessage or PostMessage) if the data is small. Or you can expose a local out-of-process COM server from one of the two ends and have the other register a callback object with it. Win32 messages are perhaps not "modern", but they're very robust and well understood in my experience, and definitely not going out of support any time soon. Neither is COM, but COM requires quite a bit more familiarity with various concepts; if you do it right, though, it gives you a very flexible RPC interface.
|
|
# ? Nov 22, 2023 22:19 |
|
MFC is still a thing?
|
# ? Nov 22, 2023 22:51 |
|
Dijkstracula posted:(I've seen this done with a bunch of different mechanisms: UNIX domain sockets, UDP datagram, long-polling over TCP, websockets. All things being equal, none are any better or worse than any other, but certain choices (websockets, I imagine, in particular) might be more difficult to do in MFC-land.) Also, this now has me wondering: would long-polling over HTTP/3 (which is not over TCP) make sense or not? Edit: also realizing maybe long-polling wasn't quite the right term for what I meant; that's generally for notification of rare updates. HTTP subscription for fast small updates is *similar*, but instead of one-poll, one-eventual-reply, it's one-poll, stream-replies, re-poll-when-client-or-server-timeout-nears. I don't know if there's a name for that. It also has a good chance of playing poorly with proxies. roomforthetuna fucked around with this message at 23:43 on Nov 22, 2023 |
# ? Nov 22, 2023 23:40 |
|
epswing posted:I've got a C++ app (MFC running on Windows 10/11) every few seconds polling (via libcurl) an ASP.NET site running on IIS on the same PC. I'd like to stop polling and reverse this communication, by having the ASP site signal the C++ app somehow. The content of the signal is small (a few bytes) and not frequent. SignalR comes to mind but the .NET Framework (i.e. pre .NET Core) cpp SignalR client library seems abandoned. Named pipe? Socket? MSMQ? Is there a recommended/modern mechanism for this? I wouldn't use SignalR because you're a native app, and SignalR is more oriented towards making it so browsers don't have to poll. Also, I hate SignalR. A named pipe will work fine in your scenario since everything is on one machine. If you ever plan on having the app and IIS live on different servers, then you might be better off with a socket. A message queue is another way that would work, but it seems like overkill for your use case. You could potentially get any of these solutions to work, though; I think a named pipe is easiest, so that's my vote.
|
# ? Nov 23, 2023 07:22 |
|
This discussion makes me think of 0mq. Did anyone ever take/use it seriously? I enjoyed reading their docs but never found a use for it.
|
# ? Nov 23, 2023 07:29 |
Named pipes can also work over the network, but it's probably difficult to get it just right if you don't have both machines joined to the same domain. And if you expose a pipe to the network, you need to mess around with its ACLs too.
|
|
# ? Nov 23, 2023 08:04 |
|
roomforthetuna posted:Also this now has me wondering would long-polling over http3 (which is not over TCP) make sense or not. Pretty sure in HTTP/3 the server can just straight up send you stuff without being asked first.
|
# ? Nov 23, 2023 09:40 |
|
Joke answer for interprocess communication: gRPC. Not a joke in that it wouldn't work, but a joke in that every 3 months you'd have to restructure your entire build system because Google decided to change protobuf and/or absl and/or gRPC in a not-backwards-compatible way AGAIN, and it's a massive pain in the arse.
|
# ? Nov 23, 2023 13:35 |
|
roomforthetuna posted:I would argue that UDP datagram is worse in most situations, because you *generally* don't want to miss messages, or to have to implement your own thing for detecting and re-sending potentially missed messages. E: but yeah, the point of my list was to say "hey, any number of solutions will do" Dijkstracula fucked around with this message at 15:55 on Nov 23, 2023 |
# ? Nov 23, 2023 15:46 |
|
roomforthetuna posted:Joke answer for interprocess communication, grpc. Wait, I thought protobuf was really stable? But also, just because Google does something doesn’t mean you have to upgrade unless they’re fixing something that’s an issue for you. Your software works, you can leave it alone.
|
# ? Nov 23, 2023 16:13 |
|
pokeyman posted:This discussion makes me think of 0mq. Did anyone ever take/use it seriously? I enjoyed reading their docs but never found a use for it.
|
# ? Nov 23, 2023 18:30 |
|
Subjunctive posted:Wait, I thought protobuf was really stable? quote:But also, just because Google does something doesn’t mean you have to upgrade unless they’re fixing something that’s an issue for you. Your software works, you can leave it alone. (Having any dependency that uses tensorflow would be another thing that probably would force you to update the whole chain of Google libraries.) Though actually it's not just a work rant - I moved my personal project from protobuf to flatbuffers in the hope of escaping Google-churn, and even that wasn't stable, so I ended up just writing my own serialization API that's better for my purposes in literally every way anyway. (Smaller generated code, simpler API, smaller serialized size, and embedded types.)
|
# ? Nov 23, 2023 20:17 |
|
None of those things you've mentioned change the wire format though? Like, if you haven't changed the proto definition itself, you can upgrade the libraries and then keep reading protos serialised by the old version (and serialise protos that the old version can read). Unless you're using the weirdo "serialise to json instead of to the actual proto wire format" thing, I dunno how that one works. proto3 is only a big change if you actually choose to use it, and no-one's forcing you to do that - you can keep using your existing proto2-formatted definitions and they still compile to the same thing.
|
# ? Nov 23, 2023 23:24 |
|
Jabor posted:None of those things you've mentioned change the wire format though? Like, if you haven't changed the proto definition itself, you can upgrade the libraries and then keep reading protos serialised by the old version (and serialise protos that the old version can read). Unless you're using the weirdo "serialise to json instead of to the actual proto wire format" thing, I dunno how that one works. Google pretty explicitly does not give a poo poo about open source clients of their libraries except *maybe* abseil, and it shows in how they migrate APIs with no kind of bridge deprecation period. Or, perhaps the best example, in how they literally had their protobuf open source library become build-incompatible with the protobuf-javascript open source library that is also owned by Google, and stay that way for over a year, with a front-page readme on the github repo that straight up said, as a gently caress you, "this is broken, we have this working internally and don't have funding to support the open source version, we expect to have it fixed by November 2022", up until about 2 weeks ago (in November 2023) when it actually updated. So anyway, it was a throwaway joke about the overkill of using gRPC for a task where it wouldn't be necessary or helpful, because it's front-of-mind for me: I've been doing a version update *for over three loving weeks*, it's so bad this time. The serious comment I'm making is "if you don't have to, and there's not a really big benefit, don't use a Google library, because the future cost is almost certainly higher than it would be for other similar libraries." Edit: We all know this about Google products in general of course, but for some reason it remains a bit shocking when it happens with libraries. roomforthetuna fucked around with this message at 03:28 on Nov 24, 2023 |
# ? Nov 24, 2023 03:25 |
|
I guess I misunderstood what you meant by stable serialisation.
|
# ? Nov 24, 2023 03:30 |
|
Jabor posted:I guess I misunderstood what you meant by stable serialisation.

1. In old proto, you *could* have the same message serialized different ways, but if you had the message in native form and serialized it, you would always get identical output bytes. This means you could compare two messages with a memcmp of the serialization, or hash it, for example.
2. When map types were added, this changed, because a map is an implicitly unordered type. In golang, map types became *intentionally unstable* for serialization, which at the time intentionally broke many existing unit tests even inside Google.
3. To compensate for this, a "serialize deterministically" option was added to various serializers, so you could optionally get back the behavior from timepoint 1. This would deterministically sort map types when serializing.
4. When the "any" type was added, it broke the behavior of that option, but the option still exists, making the behavior quite surprising. The TextFormat serializer will, in the current version, serialize deterministically even through Any types if you set the right options, but the binary serializer does not have the options to do this; if there's a map type inside the message inside an Any field, or if the Any field was previously serialized a different way (as has been valid all along), asking for deterministic serialization will get you nondeterministic serialization.

For a fun sequence of events around this, here's a little adventure through the years:
https://github.com/envoyproxy/envoy/pull/5814 - "oh no it's not deterministic"
https://github.com/protocolbuffers/protobuf/issues/5668 - "you can make it deterministic" "oh yeah, okay, we did that, we're good"
https://github.com/protocolbuffers/protobuf/issues/5731 - "actually no we're not, they still aren't deterministic" "oh yeah, Any fields aren't supposed to do that. Try just not using them."
https://github.com/envoyproxy/envoy/commit/647aea1d97be232930306183f94536d3e1f7d9ed - "aha, text formatting supports this"
https://github.com/envoyproxy/envoy/pull/30761 - "yeah but the performance of that loving sucks"
|
# ? Nov 24, 2023 03:52 |
|
I'm real happy that I convinced an architect of a project I worked on 8 years ago not to use protobuf, need to give past me a high-five.
|
# ? Nov 24, 2023 04:56 |
|
I don’t think any of our other in-use serializers are guaranteed to be deterministic, so that’s probably fine for us. Weird that they want it to be deterministic but also not?
|
# ? Nov 25, 2023 23:35 |
|
Subjunctive posted:I don’t think any of our other in-use serializers are guaranteed to be deterministic, so that’s probably fine for us. Weird that they want it to be deterministic but also not?
|
# ? Nov 26, 2023 00:28 |
|
Today I learned that the gRPC and protobuf teams at Google are separate!
|
# ? Nov 27, 2023 18:54 |
|
Subjunctive posted:Today I learned that the gRPC and protobuf teams at Google are separate! Why wouldn't they be?
|
# ? Nov 27, 2023 19:37 |
|
OddObserver posted:Why wouldn't they be? Sorry, I was typing while distracted and forgot to add that, per my source, they are “not very close”. I could definitely see the same team doing both, though, and that would probably be my instinct rather than staffing two teams each with all the language and platform expertise. It seems like the rate of change to that stuff is (should be?) low and that it would want to evolve in a coordinated way, but maybe that's a misconception.
|
# ? Nov 27, 2023 20:55 |
|
Subjunctive posted:Sorry, I was typing while distracted and forgot to add that per my source they are “not very close”. Think of Google as a government. Why buy one when you can buy two?
|
# ? Nov 27, 2023 21:48 |
|
Subjunctive posted:Sorry, I was typing while distracted and forgot to add that per my source they are “not very close”. Unfortunately it doesn't actually work like that, because neither of them uses the open source repository; they both build in a shared monorepo, so the changes are easy and simultaneous, the victim team doesn't even have to fix it themselves, and any breakages are just foisted onto open source users later without much of a care.
|
# ? Nov 28, 2023 01:43 |
|
So I have this in my code: C++ code:
C++ code:
Today I got a bug report from someone building for MacOS, targeting 10.9, that the build fails with code:
|
# ? Dec 11, 2023 11:30 |
|
Xarn posted:So I have this in my code Did you read the docs that were linked? That person isn't just building on a Mac, they're building inside of conda, which is its own incredibly hosed up environment. On Linux I've run into a whole host of problems because by default on Linux conda ships and compiles against a CentOS 6 sysroot, which isn't actually C++11 compatible. They considered moving to a CentOS 7 sysroot last year in 2022, and decided that it was too soon and to stick with CentOS 6. The docs on that link say that the build failure only happened because quote:The libc++ library uses Clang availability annotations to mark certain symbols as unavailable when targeting versions of macOS that ship with a system libc++ that do not contain them. Clang always assumes that the system libc++ is used. It looks like conda on Mac as configured uses the system Clang but ships its own libc++. System Clang pre-emptively decides to tell you that uncaught_exceptions is not available because it is not in the system libc++, but if you're building in conda you aren't using the system libc++. Conda has so many nasty edge cases; welcome to this one. Also, Mac OS 10.9 launched in 2013 and support ended in 2016, so tell them good luck with their problems with their OS that has been out of support for 7 years. Edit: All this to say, if you want this to work in conda on Mac OS, add -D_LIBCPP_DISABLE_AVAILABILITY to your CXXFLAGS. I've also had a lot of maintainers just tell me "conda isn't supported, gently caress off, use a real distro" because at its core, conda has turned into the stupidest possible rolling release distro of Linux. Second edit: The other way that I would tackle this is that you can also use the non-Apple vanilla clang compiler on Mac OS in conda by adding "llvm-openmp" to your dependencies in the conda meta.yaml. Apple clang does a lot of weird stuff, so people tend not to use the system clang in order to make things work more consistently across Mac OS / Linux.
Twerk from Home fucked around with this message at 16:16 on Dec 11, 2023 |
# ? Dec 11, 2023 16:09 |
|
I've read that specific section. But as I understand it, the sequence of events goes:
* Conda targets some ancient version of MacOS. Not great, but also not a crime.
* libc++ has the ability to tell you that such and such API is not available in the target version of MacOS, because it knows it is tightly bound to a specific version. This is actually really nice.
* libc++ happily defines the macro for a feature it will not let you use when targeting older versions. <-- this is actually terrible and can gently caress right off.
|
# ? Dec 11, 2023 16:51 |
|
Xarn posted:I've read that specific section. I haven't run into this landmine personally, but my impression of what's happening is:
* Conda targets some ancient version of MacOS. Not great, but also not a crime.
* Apple clang has the ability to tell you that such and such API is not available in the target version of MacOS's system libc++, because it knows it is tightly bound to a specific version. This is actually really nice.
* You aren't actually using the system libc++! You are linking against a newer version of libc++ that conda-forge provided, which does have the features you want. The macro is defined because those symbols are there; they're usable. However, system clang doesn't know that you're using a different libc++, and system clang tells you that it's not available before actually trying to link and finding those symbols.
Edit: the suggested fix is just telling clang not to check, because when you link it will actually work. Twerk from Home fucked around with this message at 17:00 on Dec 11, 2023 |
# ? Dec 11, 2023 16:57 |
|
This specific problem is Conda's fault. They're shipping their own copy of libc++ to enable backdeployment to older OS versions, including their own copy of the headers, but they haven't updated the headers to reflect that, and instead the headers are configured for using the system deployment. Bundling libc++ isn't a particularly exotic thing and they're just doing it wrong; they're supposed to be defining _LIBCPP_HAS_NO_VENDOR_AVAILABILITY_ANNOTATIONS. Apple's availability system is really awesome for Apple SDKs, but for complicated reasons doesn't really work for libc++. For most symbols, the availability check is a warning that's silenced by guarding uses with if (__builtin_available(macOS 13.0, *)) { ... } and handling the case where it's not available at runtime. If you ignore the warning and hit the use of the symbol on a platform where it's not available, you crash. A few C++ symbols are instead a strict availability check and don't support the runtime check. Usually this is for things like vtables that can't be weakly linked, and I'm not sure why uncaught_exceptions is one of them.
|
# ? Dec 11, 2023 18:04 |
|
Availability is a more sophisticated feature than you’re giving it credit for. “That feature requires macOS Y, but your deployment target is older than Y” isn’t an unresolvable problem, because you can dynamically test the OS version you’re running on with if (__builtin_available(macOS Y, *)), and the availability warning will be suppressed within that block. Of course, that means you have to have a fallback path for when you happen to run on an older OS. If you’ll never actually do that, you should just increase your deployment target. That’s not integrated with feature-test macros because the right solution is probably never to compile as if the feature is unconditionally unavailable. It would also have a bunch of practical difficulties around lexer/parser layering and the presumption that those macros give consistent results everywhere in the TU. If you’re building with the macOS SDK and using a macOS target triple but not actually targeting macOS, that seems like a You Problem.
|
# ? Dec 11, 2023 18:23 |
|
Plorkyeran posted:This specific problem is Conda's fault. They're shipping their own copy of libc++ to enable backdeployment to older OS versions, including their own copy of the headers, but they haven't updated the headers to reflect that and instead the headers are configured for using the system deployment. Bundling libc++ isn't a particularly exotic thing and they're just doing it wrong; they're supposed to be defining _LIBCPP_HAS_NO_VENDOR_AVAILABILITY_ANNOTATIONS. This would have to go into the header that conda-forge bundles with their newer libc++? This is probably an easy fix, and I've gotten them to accept all sorts of other stuff too. The conda-forge project just happened; nobody is really making sure that everything works or is done the right way. rjmccall posted:If you’re building with the macOS SDK and using a macOS target triple but not actually targeting macOS, that seems like a You Problem. Do you mean compiling stuff on Mac OS in the conda-forge environment? They ship their own libcxx, headers, all libs, and sometimes their own compilers, but sometimes the system compiler. I am past my depth (like everyone who touches conda, it seems) and don't know quite what you mean.
|
# ? Dec 11, 2023 19:53 |
|
Twerk from Home posted:This would have to go into the header that conda-forge bundles with their newer libstdc++? This is probably an easy fix and I've gotten them to accept all sorts of other stuff too. The conda-forge project just happened, nobody is really making sure that everything works or is done the right way. They should set -D LIBCXX_ENABLE_VENDOR_AVAILABILITY_ANNOTATIONS=NO when building libc++, which will result in the generated config header having the appropriate define set.
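In other words, something like the following when configuring libc++ (assuming the standard LLVM monorepo runtimes build; the source layout, generator, and build directory here are illustrative):

```shell
# Build libc++ with vendor availability annotations disabled, so the
# generated config header carries the appropriate define for a
# bundled-and-backdeployed libc++.
cmake -G Ninja -S runtimes -B build \
    -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
    -DLIBCXX_ENABLE_VENDOR_AVAILABILITY_ANNOTATIONS=NO
ninja -C build cxx
```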
|
# ? Dec 11, 2023 20:22 |
|
|
|
Weird question: does the first function called in a C++ program absolutely have to be called main, or can we rename it to something else? The reason I ask is that I just started some work on an embedded microcontroller with an asymmetric dual core: one is a (relatively) beefy Cortex-M7 and the other is a weaker Cortex-M4 on the same silicon. When kickstarting the chip from power-on and doing all of the stuff to init the processors before handing off fully to the software side, I end up with two functions, both called main(), when more natural naming might be main_m7() and main_m4(). My IDE absolutely rejects me naming them that, but having two different things with the same name is mildly confusing in the code. I suppose I can just make each main() call a pass-thru function to main_m7 and main_m4, but it's just an interesting thought experiment to consider naming them something else natively.
|
# ? Dec 12, 2023 05:55 |