matti
Mar 31, 2019

mawarannahr posted:

(lovely) docs


Kazinsal
Dec 13, 2011



gnu info is the worst loving thing ever to disgrace unixlikes. man pages work perfectly fine, goddammit, it's just that gnu people can't write concise manuals worth a hot poo poo and needed their own bespoke typesetting system and horrible manual browser to go along with it.

psiox
Oct 15, 2001

Babylon 5 Street Team

Jonny 290 posted:

a good post and i appreciate it

matti
Mar 31, 2019

The C implementation of info was designed as the main documentation system of GNU based operating systems and was then ported to other Unix-like operating systems. However, info files had already been in use on ITS emacs. On the TOPS-20 operating system INFO was called XINFO.[1]

ofc it is some MIT poo poo, like a lot of poo poo things

Kazinsal
Dec 13, 2011



the best software thing to come out of MIT was their simplified BSD license, which was promptly supplanted by better simplified BSD licenses

mystes
May 31, 2006

I only license my software under the 3-clause bsod license.

matti
Mar 31, 2019

Kazinsal posted:

the best software thing to come out of MIT was their simplified BSD license, which was promptly supplanted by better simplified BSD licenses

zork is good also

sb hermit
Dec 13, 2016





mawarannahr posted:

I want to be able to sit down and say “this is Unix. I know this” and have the tomes of manuals I read autistically to have relevance. I used to read the x11 manuals too (a big part of why I dislike wayland, probably
) there’s nothing comparable anymore and the way things fit together don’t make sense. it’s dissonant with my idea of unix history and convention, which seemed pretty great to me. things change too fast now and it’s mostly just red hat winging it. freebsd 5.x felt like a home — linux feels like an airbnb.

I don’t like it because I don’t like it and the mental structures I built around BSD are increasingly irrelevant yet aren’t replaced by anything that makes my life better at all. it’s complicated in places I don’t need it to be complicated, and oversimplified in places i want more control. similar with how python took over everything when smalltalk, which actually makes sense and had nice books, was right there (io would have made a good replacement that works better with the Unix model). good books are important to my computing experience. now there’s just terrible man pages, README.md, and web tutorials.

i know it sounds like a joke but im serious about unix as a way of structuring the space inside a computer — i knew the building layout, how the furnace worked, how to replace the fuse and so on, and there were good blueprints and product manuals. i feel kind of lost now and need to go to (lovely) docs a lot more because i don’t know how to memorize the new stuff, it doesn’t mesh well with the old knowledge.

gently caress esr, cathedrals all the way.

I still haven't gotten around to understanding dconf or dbus.

Everything else is overwhelming and frustrating and it's a full time job staying on top of everything. But then again, it's not like old windows 7 and 98 books have any relevance now, since half of all windows configuration is via the control panel and another half is via the administrative tools and the last half uses the new metro interface.

pseudorandom name
May 6, 2007

dbus is an RPC system: it has a message bus that clients connect to; clients can optionally have well-known names that other clients can use to find them; clients can export a list of objects (in a hierarchy), and those objects can implement one or more interfaces with methods, properties, and signals. there are various tools like d-feet or qdbusviewer to poke around. note that there are multiple message buses, e.g. there's a system-wide message bus, per-user message buses, and an entirely separate message bus for the accessibility API for some reason.
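for a concrete sketch of poking at it from python (assuming the dbus-python bindings are installed; org.freedesktop.DBus is the bus's own well-known name):

code:

import dbus

# connect to the per-user session bus (dbus.SystemBus() for the system-wide one)
bus = dbus.SessionBus()

# the bus itself is just another client with a well-known name;
# ask it which names are currently registered
proxy = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus")
for name in iface.ListNames():
    print(name)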

dconf is a registry. but with schemas because grognards hate the registry. GSettings is an API over dconf. gconf is dconf's predecessor. it also had schemas and I have no idea why they replaced it.
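reading and writing through GSettings from python looks something like this (a sketch assuming PyGObject and the GNOME desktop schemas are installed; the values land in dconf underneath):

code:

from gi.repository import Gio

# the schema describes the keys and their types; dconf is just the storage backend
settings = Gio.Settings.new("org.gnome.desktop.interface")
print(settings.get_string("gtk-theme"))

# writes go through the same API and end up in the dconf database
settings.set_string("gtk-theme", "Adwaita")
Gio.Settings.sync()  # flush pending writes before the script exits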

sb hermit
Dec 13, 2016





well that's nice, but until I write code that uses the APIs or scripts that do something useful with them, they will continue to effectively be a mystery to me

I mean, dconf seems pretty useful to configure desktop stuff but the only thing I want to do is stop tracker from running but apparently tracker doesn't listen to any configuration settings.

I might want to use dbus to do local IPC instead of domain sockets for custom software but I haven't gotten around to it just yet.
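for reference, the local-IPC version is only a handful of lines with dbus-python and a GLib main loop (a sketch; com.example.Echo is a made-up name purely for illustration):

code:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

# claim a well-known name so other programs can find us
# without having to know a socket path
name = dbus.service.BusName("com.example.Echo", bus)

class Echo(dbus.service.Object):
    @dbus.service.method("com.example.Echo", in_signature="s", out_signature="s")
    def Reverse(self, text):
        return text[::-1]

echo = Echo(bus, "/com/example/Echo")
GLib.MainLoop().run()

the client side is then just dbus.SessionBus().get_object("com.example.Echo", "/com/example/Echo") and calling Reverse() through a dbus.Interface on it, and d-feet will show the object like any other.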

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Assuming people wanted a real explanation.

dbus is just the message dispatch design pattern implemented via sockets. Network manager is a good example of its benefits. Before, nm would listen on five or six sockets that directly connected to wpa_supplicant, vpn management, etc. Any changes to those programs meant that network manager was broken until a portion of it was rewritten. Now network manager gets all of those notifications via a library that the people writing network manager don't have to maintain. They also have access to more system notifications that can impact network manager.

You can have as many instances of dbus running as you want. It's just a socket. By using one of the standard busses (kernel, system, user), you allow other software to easily get notifications from your program and you can notify them.
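For example, getting NetworkManager's state changes off the system bus is only a few lines (a sketch assuming dbus-python and PyGObject are installed; StateChanged is the signal NetworkManager emits on its org.freedesktop.NetworkManager interface):

code:

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# hook dbus-python up to a GLib main loop so it can receive signals
DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

def on_state_changed(new_state):
    # overall NetworkManager state (connecting, connected, disconnected, ...)
    print("NetworkManager state is now", int(new_state))

bus.add_signal_receiver(
    on_state_changed,
    dbus_interface="org.freedesktop.NetworkManager",
    signal_name="StateChanged",
)

GLib.MainLoop().run()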

People criticize systemd for replacing so many system daemons. Those daemons weren't designed or written to communicate with other parts of the system. There was a lot of clunky, brittle glue. Re-implementing and replacing them was easier than modifying code that had been hacked on for 30+ years.

Someone earlier pointed out how much better network manager was now. The reasons it sucked were largely outside of the control of its developers. It works a lot better now because of dbus and systemd, because it can communicate with the parts of the system important to it in a distro-agnostic manner.

Another major benefit of systemd that people don't realize is battery life/energy efficiency. The old daemons often ran shell scripts. acpid, for instance, would read from a file in /proc and then run a shell script when an event was written. The overhead involved in launching a shell script meant that the disk might need to be woken from sleep (or kept from sleeping), and that the CPU would need to enter a higher power state for a period of time. Having this happen several times a minute adds up.

The old daemons were simple because they were counting on someone being around that knew how to fix them if they got cranky. Windows and MacOS grew up on computers where there was no separation between user and operator. From the beginning they automated as much of the operation as possible. Linux inherited a design from computers that had dedicated operators/sysadmins. That vax backplane I posted a picture of earlier - most users would have never seen the physical computers, and their terminals required no configuration. The unix philosophy was predicated on knowledge making up for simplistic software. Today, most people using Linux do not find it particularly rewarding to learn that knowledge. They also aren't getting paid for it. The expectation is that the system software needs to do as much configuration and operation as it can on its own. Otherwise people interested in using a computer (business or pleasure) will use something else.

tl;dr - the only way for linux to work as a modern operating system is to re-architect it to do as much of the computer configuration and operation as possible. The unix philosophy is predicated on a set of presuppositions about computers that no longer hold for the bulk of computer users.

SYSV Fanfic fucked around with this message at 09:05 on Oct 30, 2021

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

SYSV Fanfic posted:

Windows and MacOS grew up on computers where there was no separation between user and operator. From the beginning they automated as much of the operation as possible. Linux inherited a design from computers that had dedicated operators/sysadmins.

I figured something similar in a different context when I noticed how overloaded the term "security" is in software, and compared my Linux desktop to my Lineage phone.

Classic Linux security is all about user ID and group ID and ACLs tied to them. The source of potential security violations is the user, the naughty student on a terminal trying to hack his grades, or the employee trying to hack payroll. Programs are trusted to do anything the user can do, because the sysadmins are assumed to know what they're doing. So you get stuff like, IIRC, NFS servers trusting the client's declared UIDs at face value, because it's assumed that a malicious NFS client simply will not be allowed to be installed. Vice versa, the idea of a lowly user not trusting the applications he runs with full access to his ~ is kind of odd, at best.

Android mostly flips this around. The human user is God and by his word any operation can be authorized (or almost any, in the case of mainstream non-rooted phones). ACLs are tied not to users (guest users exist but they're a marginal feature), but to applications, which are untrusted by default and do not inherit the user's (unlimited) permissions. We all know why, because app stores are full of crap and you can't expect a "sysadmin" to verify them, not even if he's a

A single-user desktop or laptop is much closer to an Android device than to a 1970s time share mainframe with user terminals. Hence Docker, Flatpak, Snap, etc... all trying to provide a similar experience to Android with random untrusted apps running in isolation, creating in effect an alternative security system built around chroot and friends.

Now let's see the folks who actually know Linux tell me how completely off-base all of this is.

SYSV Fanfic
Sep 9, 2003

by Pragmatica

NihilCredo posted:

Now let's see the folks who actually know Linux tell me how completely off-base all of this is.

Seems based to me. The reason android, mac, windows, etc. use the concept of "users" is because their security model was borrowed from multi-user systems. What they're really trying to accomplish is segregating trusted programs from untrusted programs.

The Windows 98 and Mac OS 9 malware/virus problem was almost comical. They grew up in an environment where the user was the gatekeeper of what programs executed on their computer. Their security model assumed that if code was running on the system, the user inserted a disc and did something to put it there. It broke horrifically when that assumption changed and literally anyone was allowed to run code on your computer via a browser.

Boot sector viruses were such an exception that they added guard code to most BIOSes. You'd get them from leaving an infected floppy in the drive when you powered the machine on.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
One thing I'd critique. Containers are trying to solve the dependency problem, which is caused by another legacy design decision. Shared libraries were originally really important. They saved disk space, and on systems that supported it, they saved memory. You paid for these savings by spending time upfront figuring out which version of the library (and its dependencies) was going to be on a system.

RAM and disk space are cheap compared to the number of man hours every developer has to spend to make sure their software runs on a bunch of different distributions (and now operating systems; I assume WSL will add container support). If the software doesn't work, it's a nightmare for users (GOG linux games are the worst for this, personally).

Packaging everything together in a single file removes the bulk of this. You just have to check if the kernel is new enough, and if there are system dependencies you can query via the container API or dbus.

If you want to understand the RAM savings of shared libraries: different programs can share sections of read-only memory amongst themselves with no modification or synchronization. Code is read-only, so once a shared library is loaded, the operating system can arrange for other programs that request it to share the same region of memory where the code is already loaded.
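You can see this on any linux box by looking at /proc/self/maps - here's a rough sketch that totals up the read-only shared-object mappings the kernel can share between processes:

code:

from collections import defaultdict

# read-only mappings backed by .so files are the part the kernel can
# share between every process that loads the same library
shared = defaultdict(int)
with open("/proc/self/maps") as maps:
    for line in maps:
        fields = line.split()
        if len(fields) < 6:
            continue
        addr, perms, path = fields[0], fields[1], fields[5]
        if ".so" in path and "w" not in perms:
            start, end = (int(x, 16) for x in addr.split("-", 1))
            shared[path] += end - start

for path, size in sorted(shared.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{size // 1024:6d} KiB shareable in {path}")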

edit: If anyone isn't clear on what containers are - they can be thought of as lightweight virtual machines. Rather than divide a machine up into a bunch of totally separated virtual machines, you can save the overhead of running ten copies of the kernel and everything through a network socket by having the virtual machines share a single kernel. The kernel enforces isolation instead of a hypervisor. It's a huge win as it allows machine resources to be scheduled and utilized with much less waste. Conceptually it's kind of the same as "lightweight processes" aka threads.
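A tiny illustration of "the kernel enforces isolation instead of a hypervisor": unshare() drops a process into fresh namespaces with no VM anywhere in sight (a sketch assuming linux with glibc and unprivileged user namespaces enabled):

code:

import ctypes, os, socket

CLONE_NEWUTS = 0x04000000   # new hostname (UTS) namespace
CLONE_NEWUSER = 0x10000000  # new user namespace, so no root needed

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUSER | CLONE_NEWUTS) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

# inside the new namespace the hostname change is invisible to the
# rest of the system -- same kernel, isolated view
socket.sethostname("container-demo")
print(socket.gethostname())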

SYSV Fanfic fucked around with this message at 11:07 on Oct 30, 2021

Truga
May 4, 2014
Lipstick Apathy

SYSV Fanfic posted:

RAM and disk space are cheap compared to the number of man hours

not that i entirely disagree in this case, but OTOH this mentality is also why your pc now needs 16 billion rams to display some loving cat pictures, and you're just offloading a part of your revenue to samsung because instead of paying a higher product price, customer needs to buy more ram/ssd :v:

SYSV Fanfic
Sep 9, 2003

by Pragmatica

Truga posted:

not that i entirely disagree in this case, but OTOH this mentality is also why your pc now needs 16 billion rams to display some loving cat pictures, and you're just offloading a part of your revenue to samsung because instead of paying a higher product price, customer needs to buy more ram/ssd :v:

The ram usage overwhelmingly comes from multimedia, assets, and regions of memory that can't be shared. One of the reasons browsers use so much memory (before even loading a page) now is that tabs need to be totally isolated from one another. Orders and orders of magnitude more usage.

BlankSystemDaemon
Mar 13, 2009



Let's not kid ourselves, the commodification of compute and storage, both main and auxiliary, has absolutely caused developers to become less concerned with being memory efficient - browsers (and a lot of other things) use a lot of memory because systems have a lot of memory, and because they need to isolate everything.

If there's one thing I wish people would learn from the past, it's the lessons of the UNIX wars - namely that there shouldn't be a very small number of people that gets to dictate how everyone should do it.
Unfortunately, that's exactly how it is now.

SYSV Fanfic
Sep 9, 2003

by Pragmatica

BlankSystemDaemon posted:

Let's not kid ourselves, the commodification of compute and storage, both main and auxiliary, has absolutely caused developers to become less concerned with being memory efficient - browsers (and a lot of other things) use a lot of memory because systems have a lot of memory, and because they need to isolate everything.

It's not just software.

The 8086/8088 processor chose to use variable size machine instructions like the z80 (all 8080 instructions were single byte). This allowed machine language programs to be far more compact when stored in ram (which was expensive). If they made all instructions two or four bytes in size, it was conceivable that half a program would be wasted space. This greatly complicated fetching instructions from memory.

When acorn designed ARM, ram was a lot cheaper. They could have done RISC with variable length instructions but they didn't have to. All arm instructions are 4 or 8 bytes wide, which simplifies things greatly. It also wastes a *ton* of memory space in 64 bit arm for instructions that only use the registers. You could encode a logical OR between two registers in two bytes, but you still have to pad it out to 8 bytes for it to be a valid instruction.

BlankSystemDaemon posted:

If there's one thing I wish people would learn from the past, it's the lessons of the UNIX wars - namely that there shouldn't be a very small number of people that gets to dictate how everyone should do it.
Unfortunately, that's exactly how it is now.

Debian is run democratically. There wasn't a secret cabal that rammed systemd or the /usr/bin and /bin merge down people's throats. The majority of people who care enough to maintain the project voted for it.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Last barf in my posting spew for today.

IMO much of the resistance to systemd isn't and wasn't about the politics of who was going to have to do work and who was going to have to learn. It was really about the incredibly lovely ageism that exists in tech employment. If what you know becomes irrelevant, you've got to learn something else. If it changes enough that you need to find a new job, it's dramatically worse once you're over 40, and your career is pretty much over if you're 50+.

Overwhelmingly the feelings about systemd revolved around employability more than they did technical merit.

The level of income you get from a tech job cannot be easily replaced. Especially when you have kids in college and are close to finishing off your mortgage. A long time ago I was employed by a fortune 500 on a development team that had a ton of people supporting the product in parallel. The older employees got totally hosed, and then the recession hit.

Being in tech is like living in the utopia in logan's run. Make sure you are saving - not just for retirement, but savings/investments you can tap to offset lost income before you can legally withdraw from your 401k/IRA. Get your house paid off. Count on finishing off your working years with the level of income you'd get from a help desk.

Edit: The mainframe guys I know that stayed (mainframe's dooomeeeddd) the course are living like kings. IBM bought Humana's mainframe division in part to get a senior architect and his entire team b/c he's one of the best still living and you've never even heard of him.

SYSV Fanfic fucked around with this message at 12:30 on Oct 30, 2021

mystes
May 31, 2006

Interesting take on the situation. I'm not convinced systemd really changes that much or is that significant philosophically though. However perhaps the people who are against it feel that it does/is.

mystes fucked around with this message at 12:39 on Oct 30, 2021

BlankSystemDaemon
Mar 13, 2009



mystes posted:

Interesting take on the situation. I'm not convinced systemd really changes that much or is that significant philosophically though. However perhaps the people who are against it feel that it does/is.

Nope, we're not allowed to have any opinions.

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man

mystes posted:

Interesting take on the situation. I'm not convinced systemd really changes that much or is that significant philosophically though. However perhaps the people who are against it feel that it does/is.

systemd-the-work-scheduler is a little annoying to work with imo because what you're actually doing is writing constraints for a graph optimizer in a way that's sometimes hard to understand and it has surprisingly few "no please just put it exactly here" types of escape hatches. plus the docs for exactly what goes in a unit file are scattered all over the place, it's a little inconsistent about what goes in a unit file and what is done with a specially formatted unit file name, it can be in general kind of tough to go from having a pretty good concept of what you want something to do, to writing the unit files that do that exact thing.

the rest of the capabilities that systemd gives you though are really nice and i can't imagine trying to do some hacky bullshit in terrible shell scripts with upstart and proxies through weird little binaries like inetd. systemd created a lot of churn in that now your accumulated knowledge about how to wire together 312312512312312 little utilities made in the 70s and never questioned since is finally not useful anymore, but it's good churn because they loving sucked.

SYSV Fanfic
Sep 9, 2003

by Pragmatica
perl one liners is where it's at.

Kazinsal
Dec 13, 2011



the average system purchased today is going to have 8 gigs of ram. in normal people terms this is "what's RAM, is this one of those nerd things you people talk about when planning how to make my facebook look different" and completely irrelevant except for the fact that it keeps their Big Dingus Slots Max By Zynga Games running nicely, and in grognard terms this is enough to have NAT tables for every single private address space IP address to have 20 simultaneous connections all at once and still have a whopping two megabytes left for the OS to NAT poo poo based on that, or to prefetch one tenth of the average AAA video game, whatever geeks you out the most. or about a billion average-length x86 instructions.

gently caress code size, asset size, whatever. memory is expendable but time isn't, so gently caress optimizing for size. optimize for speed.

Sapozhnik
Jan 2, 2005

Nap Ghost

SYSV Fanfic posted:

It's not just software.

The 8086/8088 processor chose to use variable size machine instructions like the z80 (all 8080 instructions were single byte). This allowed machine language programs to be far more compact when stored in ram (which was expensive). If they made all instructions two or four bytes in size, it was conceivable that half a program would be wasted space. This greatly complicated fetching instructions from memory.

When acorn designed ARM, ram was a lot cheaper. They could have done RISC with variable length instructions but they didn't have to. All arm instructions are 4 or 8 bytes wide, which simplifies things greatly. It also wastes a *ton* of memory space in 64 bit arm for instructions that only use the registers. You could encode a logical OR between two registers in two bytes, but you still have to pad it out to 8 bytes for it to be a valid instruction.

Debian is run democratically. There wasn't a secret cabal that rammed systemd or the /usr/bin and /bin merge down people's throats. The majority of people who care enough to maintain the project voted for it.

OK but L1 cache is expensive and if you have an actual pure RISC arch then that L1 is mostly going to fill up with worthless register save/restore prolog/epilog for each function that contributes no useful value. All legacy-free architectures these days use variable-length instruction encoding and have compact instructions for stack save/restore; to the extent that RISC as a design philosophy still exists, it just means you can't do ALU ops that directly reference memory with complicated indexing schemes.

I think the main thing driving the development of RISC was not memory getting cheaper but memory getting faster, to the point where the CPUs were briefly a bottleneck, although this was a very brief historical fluke. You also got things like transputers popping up around that time because single-core throughput was hitting a brick wall, but then superscalar CPUs came along and upended everything. I'm a bit fuzzy on the details though so this entire paragraph is probably complete bullshit.

x86 continues to be relevant because even though it has hundreds of dreck instructions like AAA or whatever, the actual parts that people use turn out to be a pretty good Huffman coding for practical software.

Truga
May 4, 2014
Lipstick Apathy

Kazinsal posted:

the average system purchased today is going to have 8 gigs of ram. in normal people terms this is "what's RAM, is this one of those nerd things you people talk about when planning how to make my facebook look different" and completely irrelevant except for the fact that it keeps their Big Dingus Slots Max By Zynga Games running nicely, and in grognard terms this is enough to have NAT tables for every single private address space IP address to have 20 simultaneous connections all at once and still have a whopping two megabytes left for the OS to NAT poo poo based on that, or to prefetch one tenth of the average AAA video game, whatever geeks you out the most. or about a billion average-length x86 instructions.

gently caress code size, asset size, whatever. memory is expendable but time isn't, so gently caress optimizing for size. optimize for speed.

yeah but see, now every single app is a separate chrome tab that takes half a gig of ram instead of 30mb, so now you suddenly need 16gb because you have a wiki open and discord running behind your vidya game. i have a friend who constantly complained about stutters when playing games until he finally admitted he only has 8 gigs of ram in tyool 2019 and obviously many games don't run fine inside of the remaining 2gb he had after the OS/apps anymore lol

it's been probably 5 years since i've started recommending at least 16gb ram for anything more than just browsing, and especially for work PCs, because less gets absolutely lovely to use, and because everyone now works on 16gb+ pcs your average 8gb machine user gets royally hosed since it doesn't get tested much.

not that that's in any way different to how it was in the now distant past when PCs were getting twice as fast every other month and anything you bought was obsolete in a year, it's just another but very similar kind of lovely, but at least back in those days when you bought a 5 times faster cpu after 3 years you got the speed boost, now you're just buying more ram to have things run ever slower unless you use 3rd party clients and copious amounts of adblocking, lmao

it's like that joke about disk size and how data inevitably grows to fill it, but with chrome tabs and ram

feedmegin
Jul 30, 2008

SYSV Fanfic posted:

When acorn designed ARM, ram was a lot cheaper. They could have done RISC with variable length instructions but they didn't have to. All arm instructions are 4 or 8 bytes wide, which simplifies things greatly.

Um no. Regular AArch32 and AArch64 instructions are both 4 bytes wide. Thumb(2) is a mix of 2 and 4 byte instructions in the same instruction stream, which sure sounds like variable length instructions to me, and has worked well enough for reducing code size for literal decades now, including, for example, for the Cortex-M0 with a whole 16k of SRAM I used to write firmware for. 64 bit registers does not mean 64 bit instructions!

No early RISC processor went with variable length instructions, not because of more memory, but because the REDUCED part of RISC mattered back then in terms of transistor count. A variable-length instruction fetcher is more complicated to implement especially when you have to start thinking about eg crossing cache lines. ARM in particular was initially designed to a VERY tight transistor budget since it wasn't going into fancy rear end workstations but rather PC-grade and then early-PDA-grade hardware.

feedmegin
Jul 30, 2008

Sapozhnik posted:

All legacy-free architectures these days use variable-length instruction encoding

TELL me about the variable-length instructions in 64-bit ARM, which by the way despite what you might think from the name was designed to be legacy-free (no barrel shifter, no conditional flags on every instruction, for example).

hobbesmaster
Jan 28, 2008

is arm without a barrel shifter even really arm

feedmegin
Jul 30, 2008

hobbesmaster posted:

is arm without a barrel shifter even really arm

Not to my mind :ohdear: but that's what they went with.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
some truly impressive ignorant blathering itt

mycophobia
May 7, 2008

rjmccall posted:

some truly impressive ignorant blathering itt

oh yeah?

BlankSystemDaemon
Mar 13, 2009



Kazinsal posted:

gently caress code size, asset size, whatever. memory is expendable but time isn't, so gently caress optimizing for size. optimize for speed.

The irony of the optimisation for speed that you're claiming has happened, though, is that every single developer treating storage like a commodity has meant that, for example, the average pageload on the top-10000 most visited websites is now many times slower than it was a decade ago, despite the fact that the average internet speed has similarly risen during that time.
And the same goes for video games: they take longer to load than they ever have, and all they manage is to get closer to the uncanny valley.

As for ARM, I think the instruction to optimise javascript means it might not be a RISC architecture anymore, no matter what it says on the tin.

BlankSystemDaemon fucked around with this message at 19:26 on Oct 30, 2021

pseudorandom name
May 6, 2007

BlankSystemDaemon posted:

And the same goes for video games: they take longer to load than they ever have, and all they manage is to get closer to the uncanny valley.

Non-technical video game nerds keep going on and on about how SSDs are going to change everything, how consoles can now replace the entire contents of RAM from the SSD faster than you can spin the camera around the scene, and how DirectStorage will be revolutionary on Windows 11. I keep attempting to explain to them that load times are a business decision, not a technical one, and that while Naughty Dog might be willing to devote multiple man-years to making the loads instant and invisible, the only thing that SSDs will do for most games is allow the developer to spend even less time optimizing their level design and engine performance and whatnot.

Zlodo
Nov 25, 2006

pseudorandom name posted:

load times are a business decision, not a technical one,

it is absolutely a technical decision (which is in turn influenced by design decisions, themselves influenced by business) but it is transverse to everything else, which makes it absolutely non trivial to deal with in large aaa games

like almost everything that everyone does in every team working on the game has the potential to impact loading times, and it has to be balanced with "let's make the game not look like rear end" and with "let's make the game also still run on the older, shittier consoles" to a greater extent than before given that it doesn't look like the ps5/xbox series are going to displace the xbone/ps4 quite as fast as a new console generation usually do

spiritual bypass
Feb 19, 2008

Grimey Drawer

sb hermit posted:

well that's nice, but until I write code that uses the APIs or scripts that do something useful with them, they will continue to effectively be a mystery to me

I mean, dconf seems pretty useful to configure desktop stuff but the only thing I want to do is stop tracker from running but apparently tracker doesn't listen to any configuration settings.

I might want to use dbus to do local IPC instead of domain sockets for custom software but I haven't gotten around to it just yet.

just wait 5 minutes after x startup for it to stop consuming all of your disk io

SYSV Fanfic
Sep 9, 2003

by Pragmatica

feedmegin posted:

Um no. Regular AArch32 and AArch64 instructions are both 4 bytes wide. Thumb(2) is a mix of 2 and 4 byte instructions in the same instruction stream, which sure sounds like variable length instructions to me, and has worked well enough for reducing code size for literal decades now, including, for example, for the Cortex-M0 with a whole 16k of SRAM I used to write firmware for. 64 bit registers does not mean 64 bit instructions!

You're right except thumb was added later. Original arm was 4 bytes.


feedmegin posted:

No early RISC processor went with variable length instructions, not because of more memory, but because the REDUCED part of RISC mattered back then in terms of transistor count. A variable-length instruction fetcher is more complicated to implement especially when you have to start thinking about eg crossing cache lines. ARM in particular was initially designed to a VERY tight transistor budget since it wasn't going into fancy rear end workstations but rather PC-grade and then early-PDA-grade hardware.


My main point was that by 1985 chip designers were shrugging their shoulders and telling people to buy more RAM, even in the home PC market segment that Acorn was entering. The only reason to go with variable-length instructions would have been memory efficiency, and by then it didn't matter.

pseudorandom name
May 6, 2007

Zlodo posted:

it is absolutely a technical decision (which is in turn influenced by design decisions, themselves influenced by business) but it is transverse to everything else, which makes it absolutely non trivial to deal with in large aaa games

like almost everything that everyone does in every team working on the game has the potential to impact loading times, and it has to be balanced with "let's make the game not look like rear end" and with "let's make the game also still run on the older, shittier consoles" to a greater extent than before given that it doesn't look like the ps5/xbox series are going to displace the xbone/ps4 quite as fast as a new console generation usually do

you just listed a bunch of business reasons why developers don't waste money on technical effort

SYSV Fanfic
Sep 9, 2003

by Pragmatica
Anyone interested in PC paleontology - https://archive.org/details/byte-magazine. You can most def get some cool pictures for the pictures thread if nothing else.


feedmegin
Jul 30, 2008

SYSV Fanfic posted:

You're right except thumb was added later. Original arm was 4 bytes.

I am well aware of that thank you. The ARM7TDMI came out in 1994. Decades ago as I said. I would suggest the guy talking about 8 byte ARM instructions is perhaps just a little out of their depth here.
