Zopotantor
Feb 24, 2013

...and once he's in, we never let him out again...

rjmccall posted:

Multiple interface inheritance is usually a bad idea; if your interface is really that complicated, consider having a separate table of traits that you can extract statically or dynamically from the complete type.

I was thinking along the lines of C++ concepts, or Swift protocols, where you can take one of your classes/structs and say, "OK, now make this also EqualityComparable/Identifiable/whatever" so that it becomes usable in a bunch of other APIs.

baby puzzle
Jun 3, 2011

I'll Sequence your Storm.
I'm not very smart but it seems that multiple inheritance caused weird things like objects having multiple different this addresses... somehow, which broke basic pointer arithmetic. And that is when I learned to not do that thing.
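For the curious, here's a minimal sketch of what that looks like; the class names are made up. With multiple bases, converting a derived pointer to its second base typically adjusts the address, so the same object shows up at different addresses depending on which base you view it through:
C++ code:
#include <iostream>

struct A { int a = 1; };
struct B { int b = 2; };
struct C : A, B { int c = 3; };

int main()
{
    C obj;
    // The B subobject lives at an offset inside C, so the B* view of obj
    // points somewhere past the start of the complete object.
    std::cout << static_cast<void*>(&obj) << '\n'
              << static_cast<void*>(static_cast<A*>(&obj)) << '\n'
              << static_cast<void*>(static_cast<B*>(&obj)) << '\n';
}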

Computer viking
May 30, 2011
Now with less breakage.

rjmccall posted:

Multiple interface inheritance means that there are multiple unrelated abstract interfaces being satisfied by a single object. That is already complicated.

Are we thinking of different things? I really don't see the complication in "this has .serialize(), and separately it also has .equals()". Or is that just multiple interface implementation, and you're thinking of the case where ISerializable inherits IEnterpriseObject and ICancellable?


I'll admit I'm mostly writing R these days, where the most used OO variant is, uh, minimalistic. It's also sort of funny. Everything outside primitives can be tagged with arbitrary attributes. One often-used attribute is "class". Many generic functions, like print, are just dispatchers: if you have an object with class="table", it checks if there is a print.table() function drifting around in the global namespace and forwards to that. If not, it falls back to print.default().

The equivalent of multiple interfaces in this world would be to write your own print.classname and summary.classname and plot.classname and whatever else you may need - and the equivalent of defining an interface is to write a function that dispatches based on the value of "class". Completely rear end backwards, but in practice it does work.
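A rough C++ analogue of that "extrinsic" style, if it helps; the types here are invented, and the dispatch happens at compile time via overload resolution rather than by looking up a class string at runtime, but the shape is similar: the "interface" is just whatever free functions happen to exist for a type.
C++ code:
#include <iostream>
#include <string>

struct Table  { int rows; };
struct Vector { std::string label; };

// The R-ish "print.table" / "print.vector" equivalents: free functions,
// written separately from the types themselves.
void print(const Table& t)  { std::cout << "table with " << t.rows << " rows\n"; }
void print(const Vector& v) { std::cout << "vector " << v.label << '\n'; }

// The "generic": works with any T somebody has written a print() for.
template <typename T>
void show(const T& value) { print(value); }

int main()
{
    show(Table{3});
    show(Vector{"x"});
}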

Computer viking fucked around with this message at 09:32 on Sep 2, 2020

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Zopotantor posted:

I was thinking along the lines of C++ concepts, or Swift protocols, where you can take one of your classes/structs and say, "OK, now make this also EqualityComparable/Identifiable/whatever" so that it becomes usable in a bunch of other APIs.

The basic idea of a single type satisfying several conceptual interfaces is fine. If you want to abstract over those interfaces, you can have abstractions that are either intrinsic or extrinsic to the type. Things like C++ concepts, Rust traits, and Swift protocols are extrinsic; abstract/virtual method overrides, Java interfaces, and so on are intrinsic. Intrinsic interface abstraction is generally really frustrating and limiting because it doesn't compose well; you get a lot of unpleasant artifacts that way.
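A minimal C++20 sketch of that distinction, with made-up type names; the abstract base is the intrinsic flavour, the concept is the extrinsic one:
C++ code:
#include <iostream>

// Intrinsic: the type has to opt in by inheriting and overriding.
struct Printable {
    virtual void print() const = 0;
    virtual ~Printable() = default;
};
struct Widget : Printable {
    void print() const override { std::cout << "widget\n"; }
};

// Extrinsic: any type with the right shape qualifies, including types
// you don't own and can't retrofit with a base class.
template <typename T>
concept PrintableLike = requires(const T& t) { t.print(); };

void show(PrintableLike auto const& x) { x.print(); }

struct Gadget {                       // knows nothing about Printable
    void print() const { std::cout << "gadget\n"; }
};

int main()
{
    Widget w;
    Gadget g;
    show(w);   // satisfies the concept through its member function
    show(g);   // also satisfies it, no inheritance required
}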

Star War Sex Parrot
Oct 2, 2003

Is there a compelling reason for nested classes (beyond some notion of Parent::IsRelatedToThisChild)? They can't be forward declared, which is a little annoying in our codebase where we're trying to be restrictive about included files (especially in other headers).
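To make the forward-declaration pain concrete, a small sketch (Parent/Child are made-up names): the nested class can only be declared inside the enclosing class's definition, so any header that mentions Parent::Child ends up pulling in Parent's full definition.
C++ code:
class Parent;              // forward declaring the outer class is fine
// class Parent::Child;    // ill-formed: a nested class can't be forward
                           // declared without Parent's complete definition

class Parent {
public:
    class Child;           // the only place the nested class can be declared
};

class Parent::Child {      // defining it out of line afterwards is allowed
    int x = 0;
};

Parent::Child* make_nothing() { return nullptr; }

int main() {}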

Star War Sex Parrot fucked around with this message at 16:44 on Sep 9, 2020

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today
They can be convenient sometimes, but you never need them.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


Ralith posted:

They can be convenient sometimes, but you never need them.

It's just a method of organizing your code inside a class. The upside is that if a class is declared private inside another class, there's absolutely no way for anything outside that class to access it without modifying the declaration of the outer class.

I can't think of a situation off the top of my head where that's a really good design, but it's there if you want it.

more falafel please
Feb 26, 2005

forums poster

Star War Sex Parrot posted:

Is there a compelling reason for nested classes (beyond some notion of Parent::IsRelatedToThisChild)? They can't be forward declared, which is a little annoying in our codebase where we're trying to be restrictive about included files (especially in other headers).

I might nest implementation details. public/protected/private access specifiers communicate intent -- only things that are public are meant to be used by users of the class. They're bad names, honestly; I prefer something like interface/implementation, to make it clear: here are the things objects of this class have for you to work with, and then off here on the side, here's how they do it.
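Something like this, as a rough sketch of that split (the names are invented): the public part is the interface, the private nested struct is an implementation detail off on the side.
C++ code:
#include <cstddef>
#include <vector>

class Histogram {
public:                          // the "interface": what users work with
    void add(double sample) { bins_.push_back(Bin{sample, 1}); }
    std::size_t size() const { return bins_.size(); }

private:                         // the "implementation": how it does it
    struct Bin {                 // detail type, invisible to users of Histogram
        double value;
        int    count;
    };
    std::vector<Bin> bins_;
};

int main()
{
    Histogram h;
    h.add(1.5);
    return h.size() == 1 ? 0 : 1;
}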

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today

ultrafilter posted:

I can't think of a situation off the top of my head where that's a really good design, but it's there if you want it.

Little helper structs or similar to be used as member variables come up often enough for me. No forward declaration issues there, though.

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

ultrafilter posted:

It's just a method of organizing your code inside a class. The upside is that if a class is declared private inside another class, there's absolutely no way for anything outside that class to access it without modifying the declaration of the outer class.
Except if you get an instance of it from a function, for example.
code:
#include <iostream>

class Foo {
private:
    class Bar {
    public:
        Bar(int v) : x(v) {}
        int x;
    };
public:
    Bar getBar() { return Bar(5); }
};

int main()
{
    Foo foo;
    auto bar = foo.getBar();   // auto lets us use the private nested type without naming it
    std::cout << bar.x;
    return 0;
}

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


I guess that falls under the category of things you can do but why.

Xerophyte
Mar 17, 2008

This space intentionally left blank
I'm not sure about compelling but it's pretty common for parameter objects, e.g.
C++ code:
class Foo {
public:
  struct InitializationParameters {
    // A bunch of parameters with initial values, possibly range checks to ensure validity, etc.
  };

  Foo(const InitializationParameters& params);
};
Arguably the horror is having so many constructor parameters that it's actually inconvenient to just list them in the constructors, but sometimes a parameter object is the best of a bad bunch and declaring it inside its class makes sense since it isn't useful in any other context.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

ultrafilter posted:

I guess that falls under the category of things you can do but why.

I mean, this is the C++ thread; that ship sailed decades ago.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


C++20 approved

matti
Mar 31, 2019

code:
// bool test_safe_range(long value);
// bool test_safe_range(long long value); [C++11]
//
// Return whether `value' is in the range where any arithmetic operator is
// guaranteed to not cause overflow on any target platform.
//
// FIXME: I actually have the feeling this isn't safe as a predicate for
// using an operator (the conditional block asserts that this always returns
// true). Bignum algorithms may be required. See regehr.org.

inline bool test_safe_range(long value)
{
        return value <= SAFE_LONG_MAXIMUM && value >= SAFE_LONG_MINIMUM;
}
I'm a little fried right now but can someone double-check that I'm not about to overcomplicate this?

SAFE_LONG_MAXIMUM and SAFE_LONG_MINIMUM are sqrt(2^31-1) and -sqrt(2^31) respectively
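Concretely, that would be something like the following; the exact values are my assumption based on the description above (floor(sqrt(2^31 - 1)) is 46340), so the constants in the real code may differ.
C++ code:
// Assumed values: the largest symmetric range where multiplying any two
// in-range values stays inside a 32-bit signed range.
constexpr long SAFE_LONG_MAXIMUM =  46340;   //  46340 * 46340 =  2147395600 <  2^31 - 1
constexpr long SAFE_LONG_MINIMUM = -46340;   // -46340 * 46340 = -2147395600 > -2^31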

e: Is the MSVC optimizer less aggressive in this regard? Then I could just use GNU builtin functions where needed.
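For reference, the GNU builtins mentioned there check the operation itself instead of pre-validating the range, and they're well-defined even when the result would overflow; a hedged sketch (GCC/Clang only, the wrapper name is invented):
C++ code:
#include <cstdio>

// Returns true and stores the product if a * b fits in a long,
// false if it would overflow. No undefined behaviour either way.
inline bool safe_multiply(long a, long b, long* out)
{
        return !__builtin_mul_overflow(a, b, out);
}

int main()
{
        long r;
        if (safe_multiply(46340L, 46340L, &r))
                std::printf("%ld\n", r);
        else
                std::printf("would overflow\n");
}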

matti fucked around with this message at 15:56 on Sep 17, 2020

Captain Cappy
Aug 7, 2008

Please rename the function can_be_squared_without_overflowing_a_thirty_two_bit_integer, thank you.

What are you worried about overcomplicating?

matti
Mar 31, 2019

Captain Cappy posted:

Please rename the function can_be_squared_without_overflowing_a_thirty_two_bit_integer, thank you.

What are you worried about overcomplicating?

An aggressively optimizing compiler (I'm talking of GNU) is allowed to assume that signed integers never overflow and so may optimize out conditionals that'd check for it beforehand.

Now that I've had time to ponder it a bit more clearly (lol as if) I realize it's not an issue here and I'm home safe I think.

thanks for letting me rubber duck a lil

Foxfire_
Nov 8, 2010

matti posted:

An aggressively optimizing compiler (I'm talking of GNU) is allowed to assume that signed integers never overflow and so may optimize out conditionals that'd check for it beforehand.
It may assume that signed integer overflow never happens, which may or may not allow removing tests.

code:
int Foo(int input)
{
    if (input == INT_MAX)   // Line B
    {
        print("Butts\n");
    }
    const auto output = input + 1;    // Line A
    return output;
}
- Line A always runs.
- Signed overflow never happens, so the computation on that line never overflows
- Therefore, input must always be smaller than INT_MAX on entry to the function, even though the type itself can hold INT_MAX
- On line B, we already knew that it wasn't equal to INT_MAX.
- So the condition is always false and can be deleted

vs

code:
int Foo(int input)
{
    if (input == INT_MAX)   // Line B
    {
        print("Butts\n");
        return -1;
    }
    const auto output = input + 1;    // Line A
    return output;
}
- Now Line A doesn't always run. We don't know anything about the possible range of input
- It might be INT_MAX on line B, so the condition may or may not be true and can't be removed

Falcorum
Oct 21, 2010

matti posted:

An aggressively optimizing compiler (I'm talking of GNU) is allowed to assume that signed integers never overflow and so may optimize out conditionals that'd check for it beforehand.

Now that I've had time to ponder it a bit more clearly (lol as if) I realize it's not an issue here and I'm home safe I think.

thanks for letting me rubber duck a lil

If you're worrying about compiler optimisations, it's always worth testing and finding out. In general it tends to be the less obvious "the compiler can optimise this" cases that'll catch you.

Had a fun one where we had classes A1, A2, A3, A4 and the only difference between them was changing what some of the methods did, and since all the methods were inline-able and the class signatures were identical, MSVC just decided to elide all of them except one. That took forever to track down since things still mostly worked except for the occasional crash because it should have accessed one of the other functions.

Xeom
Mar 16, 2007
.

Xeom fucked around with this message at 16:53 on Sep 23, 2020

fankey
Aug 31, 2001

I'm trying to make a simple TLS client/server using Qt.

Server looks like
code:
#include "myserver.h"
#include <QtNetwork/QSslSocket>
#include <QtNetwork/QTcpSocket>
#include <QtNetwork/QSslPreSharedKeyAuthenticator>


class SslServer : public QTcpServer
{

protected:
    void incomingConnection(qintptr socketDescriptor)
    {
        QSslSocket *socket = new QSslSocket;
        socket->setPeerVerifyMode(QSslSocket::VerifyNone);
        socket->setProtocol(QSsl::SslV2);

        QObject::connect(
            socket, &QSslSocket::connected,
            []() { qDebug() << "CONNECTED"; }
        );
        QObject::connect(
            socket, &QSslSocket::encrypted,
            []() { qDebug() << "ENCRYPTED"; }
        );
        QObject::connect(
            socket, &QSslSocket::modeChanged,
            [](QSslSocket::SslMode mode) { qDebug() << "MODE " << mode; }
        );
        QObject::connect(
            socket, &QSslSocket::readyRead,
            [socket]() { qDebug() << "GOT DATA " << socket->readAll(); }
        );
        QObject::connect(
            socket, &QSslSocket::preSharedKeyAuthenticationRequired,
            [](QSslPreSharedKeyAuthenticator *authenticator) {
            qDebug() << "AUTH";
        }
        );
        QObject::connect(
            socket, &QSslSocket::disconnected,
            [socket]() {
                qDebug() << "DISCONNECTED ";
                socket->deleteLater();
            }
        );
        QObject::connect(
            socket, static_cast<void ( QSslSocket::* )( QAbstractSocket::SocketError )>( &QAbstractSocket::error ),
            [socket]( QAbstractSocket::SocketError ) {
                qDebug() << "ERROR " << socket->errorString();
                socket->deleteLater();
            }
        );
        if (socket->setSocketDescriptor(socketDescriptor)) {
            addPendingConnection(socket);
            //connect(serverSocket, &QSslSocket::encrypted, this, &SslServer::ready);
            qDebug() << "starting encryption";
            socket->startServerEncryption();
        } else {
            delete socket;
        }
    }
};

MyServer::MyServer(QObject *parent) : QObject(parent)
{
    server = new SslServer();
    server->listen(QHostAddress::Any, 4545);
    qDebug() << "ssl Is " << QSslSocket::supportsSsl();
    qDebug() << QSslSocket::sslLibraryBuildVersionString();
}
and the client looks like
code:
#include <QCoreApplication>
#include <QtNetwork/QSslSocket>
#include <QtNetwork/QSslPreSharedKeyAuthenticator>


int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    qDebug() << "ssl Is " << QSslSocket::supportsSsl();
    qDebug() << QSslSocket::sslLibraryBuildVersionString();

    QSslSocket* socket = new QSslSocket();
    socket->setProtocol(QSsl::SslV2);
    socket->setPeerVerifyMode(QSslSocket::VerifyNone);

    socket->connectToHostEncrypted("localhost", 4545 );
    QObject::connect(
        socket, &QSslSocket::connected,
        []() { qDebug() << "CONNECTED"; }
    );
    QObject::connect(
        socket, &QSslSocket::encrypted,
        []() { qDebug() << "ENCRYPTED"; }
    );
    QObject::connect(
        socket, &QSslSocket::modeChanged,
        [](QSslSocket::SslMode mode) { qDebug() << "MODE " << mode; }
    );
    QObject::connect(
        socket, &QSslSocket::preSharedKeyAuthenticationRequired,
        [](QSslPreSharedKeyAuthenticator *authenticator) {
        qDebug() << "AUTH";
    });
    QObject::connect(
        socket, &QSslSocket::readyRead,
        [socket]() { qDebug() << "GOT DATA " << socket->readAll(); }
    );
    QObject::connect(
        socket, &QSslSocket::disconnected,
        [socket]() {
            qDebug() << "DISCONNECTED ";
            socket->deleteLater();
        }
    );
    QObject::connect(
        socket, static_cast<void ( QSslSocket::* )( QAbstractSocket::SocketError )>( &QAbstractSocket::error ),
        [socket]( QAbstractSocket::SocketError ) {
            qDebug() << "ERROR " << socket->errorString();
            socket->deleteLater();
        }
    );



    return a.exec();
}
While I'm well aware that encryption without authentication is not security, for this test I want to ignore any certificate verification on either the client or server. I've tried pretty much every protocol and none of them work - and they all have different errors.

using SslV2:
code:
Client:
ssl Is  true
"OpenSSL 1.1.1d  10 Sep 2019"
MODE  1
ERROR  "Error creating SSL context (unsupported protocol)"
ERROR  "Unable to init SSL Context: "
CONNECTED
DISCONNECTED

Server:
ssl Is  true
"OpenSSL 1.1.1d  10 Sep 2019"
MODE  1
ERROR  "Error creating SSL context (unsupported protocol)"
ERROR  "Unable to init SSL Context: "
CONNECTED
DISCONNECTED
TlsV1_2
code:
Client:
ssl Is  true
"OpenSSL 1.1.1d  10 Sep 2019"
MODE  1
CONNECTED
AUTH
ERROR  "Error during SSL handshake: error:141970DF:SSL routines:tls_construct_cke_psk_preamble:psk identity not found"
DISCONNECTED

Server:
ssl Is  true
"OpenSSL 1.1.1d  10 Sep 2019"
starting encryption
MODE  2
ERROR  "The remote host closed the connection"
DISCONNECTED
I feel like I'm missing something basic. I thought setting the VerifyMode to VerifyNone would ignore any verification on either end, but I see the client getting the preSharedKeyAuthenticationRequired callback, which I wouldn't think would happen. In .NET land I can pass a RemoteCertificateValidationCallback into the SslStream constructor that can always return true to accept any server cert, but I don't see that object in Qt. This is using Qt 5.12.19 on Windows if that matters.

MrMoo
Sep 14, 2000

That all assumes certs are registered with the system? SSLv2/v3 will be disabled by now, and TLS 1.0 is unlikely to work either, which leaves 1.1 and 1.2; the API may not support 1.3 yet.

fankey
Aug 31, 2001

I'm assuming setPeerVerifyMode(QSslSocket::VerifyNone) would remove any need to verify the cert and not require it to be installed on the system.

I'm thinking the problem is that a cipher requiring PSK is being selected. I tried just specifying a single cipher ( TLS_AES_128_GCM_SHA256 ) on both sides and now it errors out with a
code:
Error during SSL handshake: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher
EDIT: got it working. Once I supplied a self-signed cert/key on the server end, it selected a non-PSK cipher, which then worked without any shared secret.
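Presumably the working server setup was something along these lines (the .pem/.key paths are placeholders, and TLS 1.2 is assumed as in the earlier test); QSslSocket can load a local certificate and key before startServerEncryption():
C++ code:
// Inside incomingConnection(), before starting encryption:
QSslSocket *socket = new QSslSocket;
socket->setPeerVerifyMode(QSslSocket::VerifyNone);
socket->setProtocol(QSsl::TlsV1_2);
socket->setLocalCertificate("server.pem");   // self-signed certificate
socket->setPrivateKey("server.key");         // matching private key
// ...connect signals as before...
if (socket->setSocketDescriptor(socketDescriptor)) {
    addPendingConnection(socket);
    socket->startServerEncryption();
} else {
    delete socket;
}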

fankey fucked around with this message at 19:17 on Sep 24, 2020

Xeom
Mar 16, 2007
I dunno if this is the correct thread, but I've been building my C and C++ projects as one big executable with no linking, just a clean build every time. As expected, my compile times started to get stupid long, so I decided I would go ahead and try to step up my build game. Not gonna lie, this has absolutely broken me and has removed any desire to ever code again. I don't know how many build systems I've looked at, but each one seems more complex and stupid than the last.

Is there really no way to just loving tell the compiler: here is a folder with all my sources. Here is a folder with matching .h files. Here is a folder with some libraries that have no matching source files. Finally I'll '-I' any other library like freetype.

I dunno man, this poo poo is over 40 years old and nobody has figured out a way to just loving give it a few folders and then just go? Is this poo poo for loving real?
I feel like I've wasted the last 3 months learning C++. gently caress this poo poo.

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


The problem is that your approach only works for very small projects, and a build system that only supports small projects isn't all that useful. Everything's built to scale.

CMake is probably as close as you're going to get. For a small project where the instructions are just "compile everything in this folder and make an executable out of it", it's pretty simple to come up with something that works.

Brownie
Jul 21, 2007
The Croatian Sensation
I’m also pretty new to C++ and the only build system I’ve found tolerable so far has been Premake. Once I found an example script (BGFX uses a fork of Premake), the DSL is simple enough and lets you do exactly what you describe. The documentation is also sane and mostly clear.

The only thing I still routinely struggle with is shared libs on Windows. I just haven’t figured out how to compile one project into a shared lib for use in another without manually copying the resulting .DLL file, which I’m obviously not going to do. Static libs link fine though.

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum
CMake is widely used and will help you integrate third-party dependencies; I would recommend you use it if you have a choice. Do a bit of research on modern CMake usage, it isn’t that bad and generally ends up pretty nice! It will even help you integrate test frameworks and linting (clang-tidy)

Xarn
Jun 26, 2015

ultrafilter posted:

The problem is that your approach only works for very small projects, and a build system that only supports small projects isn't all that useful. Everything's built to scale.

CMake is probably as close as you're going to get. For a small project where the instructions are just "compile everything in this folder and make an executable out of it", it's pretty simple to come up with something that works.

You can even use globs.

Most of the time :v:

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

Xeom posted:

Is there really no way to just loving tell the compiler: here is a folder with all my sources. Here is a folder with matching .h files. Here is a folder with some libraries that have no matching source files. Finally I'll '-I' any other library like freetype.
A simple Makefile is really pretty good at this if you don't want to deal with CMake or Premake. I feel like CMake and Premake are geared to eventual cross-platform deployment, which means you need to learn more stuff than you really need if all you want is to just run some drat command-lines automatically.

Here's an extract from one of mine that might do the trick for you (I also had some more complicated things in there because it also builds proto files):

code:
SRCDIR = src
OBJDIR = obj
CPPFILES := $(wildcard $(SRCDIR)/*.cpp)
CPPOBJS := $(addprefix $(OBJDIR)/,$(notdir $(CPPFILES:.cpp=.o)))
BINTARGET := bin/yourfilename
LINKTARGETS := -lcapnp -lkj  # switch in your -l libraries here
# Select one of clang++ or g++ as the compiler.
ifneq (, $(shell which clang++))
CC = clang++
else ifneq (, $(shell which g++))
CC = g++
else
$(error "No clang++ or g++ in $(PATH) - $(shell which clang++)")
endif
CFLAGS = -MMD -std=c++11 -Ilibtomcrypt  # switch in your additional -I include paths here

.DEFAULT_GOAL = all

# A wildcard here is fine because the dependencies are all plain files,
# not chains - if the dependencies are being built now, so is the file.
-include $(OBJDIR)/*.d

all:    $(BINTARGET)

$(OBJDIR)/%.o: $(SRCDIR)/%.cpp
        @mkdir -p $(@D)
        $(CC) $(CFLAGS) -c -o $@ $<

$(BINTARGET):        $(CPPOBJS)
        @mkdir -p $(@D)
        $(CC) $(LDFLAGS) -o $@ $(CPPOBJS) $(LINKTARGETS)

clean:
        rm -rf $(OBJDIR)
This doesn't require any reference to the header files because the compile step auto-generates the *.d files, which introduce the dependencies on the header files. (If the *.d files aren't present yet then those dependencies don't matter yet because the related cpp file still has to be built anyway.) - essentially the header file references *are* the #includes in the cpp files.

It's not as automatically versatile as CMake but it's also more inclined to stay out of your way and do just what you ask. And it's fast, and doesn't require installing anything that isn't pretty much always installed.

MrMoo
Sep 14, 2000

That only includes dependency consumption, not generation; I expect to see a gcc -M. It's no better than a simple script with a single compile command and a wildcard. In fact it's worse, due to having to force clean builds most of the time.

The CMake tutorial is relatively pain-free. Anything above utterly basic can quickly become a pain in any build system, and most build systems have their own opinionated view of how one should do things; it is really not in your interest to try otherwise.

MrMoo fucked around with this message at 18:21 on Sep 30, 2020

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

MrMoo posted:

That only includes dependency consumption, not generation; I expect to see a gcc -M. It's no better than a simple script with a single compile command and a wildcard. In fact it's worse, due to having to force clean builds most of the time.
There's -MMD in the flags, which does that.

Why would you "have to force clean builds most of the time"? I have never had to force clean a build. Literally the only time you have to do that is if you update header files or libraries that live outside of the project, in a way that changes the build outcome, like if you do a major update to one of your dependency libraries.

And it's better than a simple script because it doesn't rebuild any object file that's already up to date. Like literally the thing that was being asked for. :confused:

MrMoo
Sep 14, 2000

I think it changed because generating dependencies and compiling were different stages a long time ago, not together in one step. Not the only one surprised, although I've never gone down the sed route:

https://news.ycombinator.com/item?id=15061255

i.e. gcc -M only appeared in an ugly rear end rule like this, as a manual pre-processing step:

code:
%.d: %.c
        @set -e; rm -f $@; \
         $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
         sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
         rm -f $@.$$$$
to quote,

GNU posted:

With old make programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like ‘make depend’. That command would create a file depend containing all the automatically-generated prerequisites; then the makefile could use include to read them in (see Include).

Last touched a Makefile with Sun ONE Studio / Sun WorkShop / Sun Pro CC, and awful friends aCC, and xlc.

MrMoo fucked around with this message at 18:40 on Sep 30, 2020

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!

MrMoo posted:

I think it changed because generating dependencies and compiling were different stages a long time ago, not together in one step. Not the only one surprised, although I've never gone down the sed route:
Ah, yeah, I was surprised by it too about five years ago when I wrote this Makefile, my previous one having been ten years ago when it still had a separate make depend step. The newer way is pretty great!

MrMoo
Sep 14, 2000

Yeah, with the old way you could change a header file, only some source files would get rebuilt, and you'd end up with a Frankenstein's executable built from different source file versions. Debugging crashes becomes a nightmare.

I'd also recommend SCons as being more developer-friendly, but it is not fun on Windows at the best of times; maybe it's less painful under WSL?

Dren
Jan 5, 2001

Pillbug

Sweeper posted:

CMake is widely used and will help you integrate third-party dependencies; I would recommend you use it if you have a choice. Do a bit of research on modern CMake usage, it isn’t that bad and generally ends up pretty nice! It will even help you integrate test frameworks and linting (clang-tidy)

I’ve recently had to default the cmake clang-tidy integration to off. clang-tidy integration can double or triple the project build time compared to a normal build. And if you’ve got caching set up like we do you’re really shooting yourself in the foot by having it on. Best case with caching on and building from clean is fast, like seconds. But with clang-tidy integration turned on it’s going to take nearly the full build time.

clang-tidy integration also uses enough memory that we had to limit parallelization of our builds (-j4 instead of -j) to avoid the builds being killed by the OOM killer.

The approach I put in place instead is to lint only the files that have changed in the branch. This isn’t perfect, it can miss linting a change in a header if an associated translation unit didn’t also change, but it’s so much faster that it’s worth it. I set up a nightly to run the full clang-tidy integrated build.

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

Dren posted:

I’ve recently had to default the cmake clang-tidy integration to off. clang-tidy integration can double or triple the project build time compared to a normal build. And if you’ve got caching set up like we do you’re really shooting yourself in the foot by having it on. Best case with caching on and building from clean is fast, like seconds. But with clang-tidy integration turned on it’s going to take nearly the full build time.

clang-tidy integration also uses enough memory that we had to limit parallelization of our builds (-j4 instead of -j) to avoid the builds being killed by the OOM killer.

The approach I put in place instead is to lint only the files that have changed in the branch. This isn’t perfect, it can miss linting a change in a header if an associated translation unit didn’t also change, but it’s so much faster that it’s worth it. I set up a nightly to run the full clang-tidy integrated build.

I generally build with linting off while making changes because of this. It is on by default in the build, and as a backstop we can't push something that won't build or lint, so it gets caught if I forget to rebuild with it on. Our builds aren't too bad though, only like 6-10 minutes for the things I usually touch

Xarn
Jun 26, 2015
Which fucker thought that making std::unique_ptr a shallow-const type was a good idea?

Captain Cappy
Aug 7, 2008

It's unfortunately how all pointers work, so they probably wanted to keep that symmetry.
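For anyone wanting the elaboration: "shallow const" means a const std::unique_ptr<T> still hands out a mutable T; const-ness doesn't propagate to the pointee unless you spell it in the element type. A minimal illustration:
C++ code:
#include <memory>

int main()
{
    const std::unique_ptr<int> p = std::make_unique<int>(1);
    *p = 2;          // fine: const applies to the pointer, not the pointee
    // p.reset();    // error: can't modify the unique_ptr itself

    const std::unique_ptr<const int> q = std::make_unique<const int>(1);
    // *q = 2;       // "deep" const has to be spelled out in the element type
    return *p + *q;
}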

Xarn
Jun 26, 2015
True, but you are supposed to fix other people's mistakes, not compound them :v:

Absurd Alhazred
Mar 27, 2010

by Athanatos

Xarn posted:

Which fucker thought that making std::unique_ptr a shallow-const type was a good idea?

Could you elaborate?
