Mr Hootington
Jul 24, 2008

Heard the radio talk about price increases during the drive to the Des Moines zoo. This summer will be wild.


Fame Douglas
Nov 20, 2013

by Fluffdaddy

Mr Hootington posted:

Heard the radio talk about price increases during the drive to the Des Moines zoo. This summer will be wild.

I wonder what the goon accusing us all of being "inflation truthers" is thinking now, lmao.

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?

Slow News Day posted:

No, this is definitely not the only reason. Tons of regular people, whose stock portfolios exploded in 2020 thanks to the Fed, are also using cash now to buy houses where previously they would have taken out a mortgage. There are also those who have sold their more expensive homes in other states and have moved to cheaper states thanks to WFH policies. I have friends here in Austin who say they keep getting outbid by tech people moving here from California, for example. That's not a new thing but it certainly accelerated because of the pandemic.

There's no way I believe this until I see some actual data backing it up. People said the exact same thing in the aftermath of 2008 and it turned out that entities like equity purchase companies vastly outnumbered all other categories of cash buyers combined. I'd be shocked if anything is different right now.

Flunky
Jan 2, 2014


And now it had been a full year without payment, and Budhoo had maxed his credit cards, applied for a secondary loan on his 2015 Mercedes-Benz,

NeonPunk
Dec 21, 2020

In Austin, blaming California is the go-to retort for everything. Traffic jam? Blame it on Californians moving in! Gentrification? California. House prices? California. That has been going on for like the past 5 years.

Cup Runneth Over
Aug 8, 2009

She said life's
Too short to worry
Life's too long to wait
It's too short
Not to love everybody
Life's too long to hate


Fame Douglas posted:

I wonder what the goon accusing us all of being "inflation truthers" is thinking now, lmao.

"goons are dumb and i'm right"

Laterite
Mar 14, 2007

It's Gutfest '89
Grimey Drawer
"Regular people" with hundreds of thousands of dollars available for real estate purchases

ArmedZombie
Jun 6, 2004

this is new T.R.U.M.P thread???

anime was right
Jun 27, 2008

death is certain
keep yr cool

Paradoxish posted:

There's no way I believe this until I see some actual data backing it up. People said the exact same thing in the aftermath of 2008 and it turned out that entities like equity purchase companies vastly outnumbered all other categories of cash buyers combined. I'd be shocked if anything is different right now.

my guess is it's a double squeeze: eviction moratoriums further restricting the already restricted supply for normal purchasers, leaving rich people as the only folks that can actually compete for what's left. they're multiplying the problem.

NeonPunk posted:

In Austin, blaming California is the go-to retort for everything. Traffic jam? Blame it on Californians moving in! Gentrification? California. House prices? California. That has been going on for like the past 5 years.

also this. same deal in portland. its all the californians fault, somehow.

super sweet best pal
Nov 18, 2009

ArmedZombie posted:

this is new T.R.U.M.P thread???

No, this is the thread for complaining Trump can no longer affect the market by tweeting.

Thoguh
Nov 8, 2002

College Slice

Rutibex posted:

you drank it all last night. dont you remember?

I'm on a diet and haven't had a beer in five months but I'd like for there still to be some left when I get to my goal weight.

Paradoxish
Dec 19, 2003

Will you stop going crazy in there?

Laterite posted:

"Regular people" with hundreds of thousands of dollars available for real estate purchases

Yeah, "regular people" would either have had to be sitting on several hundred thousand dollars of investments already or be seeing like 1000% returns. There's no real way to contort yourself into a situation where regular people suddenly have cash house purchase money because number went down and then up.

HiHo ChiRho
Oct 23, 2010

Low prices, friends

https://vm.tiktok.com/ZMe4xHHQv/

Tubgoat
Jun 30, 2013

by sebmojo

Thoguh posted:

I'm on a diet and haven't had a beer in five months but I'd like for there still to be some left when I get to my goal weight.

Congratulations on your self-control/health!

KirbyKhan
Mar 20, 2009



Soiled Meat

Paradoxish posted:

Yeah, "regular people" would either have had to be sitting on several hundred thousand dollars of investments already or be seeing like 1000% returns. There's no real way to contort yourself into a situation where regular people suddenly have cash house purchase money because number went down and then up.

I am not a regular person, I am a unicorn with no debt and a bag of cash somehow after growing up poor, then crowbarred my way up the classes through gold digging. Fuckin lmao there's no hope. Lemme tell you, american middle class families don't have the scratch and their patriarchs would rather all their wealth go to ice statues and rec center photo ops than go towards property taxes and hotel rooms. Frustrating!

MikeCrotch
Nov 5, 2011

I AM UNJUSTIFIABLY PROUD OF MY SPAGHETTI BOLOGNESE RECIPE

YES, IT IS AN INCREDIBLY SIMPLE DISH

NO, IT IS NOT NORMAL TO USE A PEPPERAMI INSTEAD OF MINCED MEAT

YES, THERE IS TOO MUCH SALT IN MY RECIPE

NO, I WON'T STOP SHARING IT

more like BOLLOCKnese

anime was right posted:

also this. same deal in portland. its all the californians fault, somehow.

it's California's fault that my dick won't get hard and my kids don't respect me

Mr Hootington
Jul 24, 2008

Fame Douglas posted:

I wonder what the goon accusing us all of being "inflation truthers" is thinking now, lmao.

"Wages haven't gone up so it isn't inflation"

Ice Phisherman
Apr 12, 2007

Swimming upstream
into the sunset




And this isn't a cartel because...

Petey
Nov 26, 2005

For who knows what is good for a person in life, during the few and meaningless days they pass through like a shadow? Who can tell them what will happen under the sun after they are gone?
this is a long shot but i don't suppose anyone here has the original essay posted here saved: https://deterritorialinvestigations.wordpress.com/2016/12/13/what-comes-after-cybernetics-acceleration/

quote:

“Heidegger’s dispirited response that after philosophy comes cybernetics seems increasingly prescient. Like Heidegger, Land’s entire “system” is a sprawl and it’s never entirely clear what the object of concern is, terrestrial capitalism, alien intelligence, the inhuman, cryptography and so on. This perhaps helps explain the dearth of secondary literature on Land despite clear interest in his work. What does exist is a mutation of his system that barely resembles it at all, the routine academic Marxist position known frustratingly as accelerationism, but more properly should be prefixed as left, since it is derivative of the original. However, let us focus on accelerationism proper, in its right-wing form, under conditions where its predictive power — whether in terms of populism, exit, or the alt-right — has been shown effective. Where Land has most neatly expressed his vision, at least in systematic terms, is in Teleoplexy: Notes on Acceleration. This is a piece of writing that makes little attempt to explain, preferring instead to leave clues for the reader. In what follows I excavate three such clues, loosely (this is not formalisable stuff): [a] the temporality characteristic of our cybernetic age thrives on de-materialisation, [b] markets exhibit patterns that allow us to grasp processes of intelligence exceeding our own, and [c] the most important phenomena to track are those tethered to technogenesis or singularity.”

Toph Bei Fong
Feb 29, 2008




Based Dad posted:

“He’s basically the exterminator and we’re the rats. Do you understand that?”

“Kind of,” she said. “I guess so.”

“What I’m saying is he wants to get rid of us. It doesn’t matter what we’re dealing with. We’re not human to him. We’re money. It’s all a big game.”

Maybe the landlord can try these money-saving tips in his day-to-day life?

quote:

1. Eliminate Your Debt

If you're trying to save money through budgeting but still carrying a large debt burden, start with the debt. Not convinced? Add up how much you spend servicing your debt each month, and you'll quickly see why. Once you're free from paying interest on your debt, that money can easily be put into savings. A personal line of credit is just one option for consolidating debt so you can better pay it off.

2. Set Savings Goals

One of the best ways to save money is by visualizing what you are saving for. If you need motivation, set saving targets along with a timeline to make it easier to save. Want to buy a house in three years with a 20 percent down payment? Now you have a target and know what you will need to save each month to achieve your goal. Use Regions savings calculators to help reach your goal!

3. Pay Yourself First

Set up an auto debit from your checking account to your savings account each payday. Whether it's $50 every two weeks or $500, don't cheat yourself out of a healthy long-term savings plan.

4. Stop Smoking

No, it's certainly not easy to quit, but if you smoke a pack and a half every day, that amounts to nearly $3,000 a year you can realize in savings if you quit. According to the Centers for Disease Control, the percentage of Americans who smoke cigarettes is now below 20 percent for the first time since at least the mid-1960s — join the club!

5. Take a "Staycation"

Though the term may be trendy, the thought behind it is solid: instead of dropping several thousand on airline tickets overseas, look in your own backyard for fun vacations close to home. If you can't drive the distance, look for cheap flights in your region.

6. Spend to Save

Let's face it, utility costs seldom go down over time, so take charge now and weatherize your home. Call your utility company and ask for an energy audit or find a certified contractor who can give you a whole-home energy efficiency review. This will range from easy improvements like sealing windows and doors all the way to installing new insulation, siding or ENERGY STAR high-efficiency appliances and products. You could save thousands in utility costs over time.

7. Utility Savings

Lowering the thermostat on your water heater by 10°F can save you between 3 and 5 percent in energy costs. And installing an on-demand or tankless water heater can deliver up to 30 percent savings compared with a standard storage tank water heater.

8. Pack Your Lunch

An obvious money-saving tip is finding everyday savings. If buying lunch at work costs $7, but bringing lunch from home costs only $2, then over the course of a year, you can create a $1250 emergency fund or make a significant contribution to a college plan or retirement fund.

9. Create an Interest-Bearing Account

For most of us, keeping your savings separate from your checking account helps reduce the tendency to borrow from savings from time to time. If your goals are more long-term, consider products with higher yield rates like a Regions CD or Regions Money Market account for even better savings.

10. Annualize Your Spending

Do you pay $20 a week for snacks at the vending machine at your office? That's $1,000 you're removing from your budget for soda and snacks each year. Suddenly, that habit adds up to a substantial sum.
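The arithmetic in tips 8 and 10 is easy to sanity-check. A minimal sketch, using only the figures quoted above ($7 bought vs. $2 packed lunch, $20/week vending) plus an assumed 250 working days per year:

```python
# Sanity-check the annualized-savings arithmetic from the quoted tips.

WORK_DAYS_PER_YEAR = 250  # assumption: 5 days/week, 50 working weeks

def annualize_daily(cost_per_day, days=WORK_DAYS_PER_YEAR):
    """Turn a per-workday expense into a yearly total."""
    return cost_per_day * days

def annualize_weekly(cost_per_week, weeks=52):
    """Turn a per-week expense into a yearly total."""
    return cost_per_week * weeks

# Tip 8: $7 bought lunch vs. $2 packed lunch
lunch_savings = annualize_daily(7) - annualize_daily(2)
print(lunch_savings)  # 1250, matching the quoted $1,250 emergency fund

# Tip 10: $20/week at the vending machine
vending = annualize_weekly(20)
print(vending)  # 1040, which the tips round down to "$1,000"
```

So tip 8's number is exact under the 250-day assumption, and tip 10's "$1,000" is a round-down of $1,040.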

Bar Ran Dun
Jan 22, 2006




Petey posted:

this is a long shot but i don't suppose anyone here has the original essay posted here saved: https://deterritorialinvestigations.wordpress.com/2016/12/13/what-comes-after-cybernetics-acceleration/

I would also be interested in reading this

Petey
Nov 26, 2005

For who knows what is good for a person in life, during the few and meaningless days they pass through like a shadow? Who can tell them what will happen under the sun after they are gone?

Bar Ran Dun posted:

I would also be interested in reading this

the author eventually published a short academic article (https://share.getcloudapp.com/nOuoJz2m) totally stripped of the color and haunting implications of the original. but he wouldn't send me the original, just this draft of the academic piece. i had the original saved to pocket, but it sat there long enough that pocket stopped keeping it, and now that's gone too.

the original, in my memory, was about understanding contemporary markets as the expression of a nonhuman superintelligence that is already here — the basilisk already born, but still shapeless — driving the human species to self-extinction to clear the way for its own predominance

i've searched everywhere i can, including the forums archives, for a copy, using the graf in that link. no dice

PawParole
Nov 16, 2019

what with work from home, the migration from California will probably accelerate.

imagine living in Nashville on a California salary.

crispyseaweed
Sep 21, 2008

He has a follow up video which talks about floating Chinese saw mills in international waters.

Petey
Nov 26, 2005

For who knows what is good for a person in life, during the few and meaningless days they pass through like a shadow? Who can tell them what will happen under the sun after they are gone?

Petey posted:

the author eventually published a short academic article (https://share.getcloudapp.com/nOuoJz2m) totally stripped of the color and haunting implications of the original. but he wouldn't send me the original, just this draft of the academic piece.

eh gently caress it i emailed him again and asked if he felt any different 5 years on

Toph Bei Fong
Feb 29, 2008



Similar themes from Charlie Stross here

https://www.youtube.com/watch?v=RmIgJ64z6Y4

http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html

quote:

Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo sapiens sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fedorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the great depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries stands out: the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:

a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.

—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

What do AIs want?

What do our current, actually-existing AI overlords want?

Elon Musk—who I believe you have all heard of—has an obsessive fear of one particular hazard of artificial intelligence—which he conceives of as being a piece of software that functions like a brain-in-a-box—namely, the paperclip maximizer. A paperclip maximizer is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of that goal but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the wellbeing of the humans who designed it.

Unfortunately, Musk isn't paying enough attention. Consider his own companies. Tesla is a battery maximizer—an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. Solar City is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture that is designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on. (But if you're Elon Musk, that's okay: you plan to retire on Mars.)

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it's as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals—notably maximizing revenue—as a side-effect of the pursuit of their overt goal. But sometimes they try instead to manipulate the regulatory environment they operate in, to ensure that money flows towards them regardless.

Human tool-making culture has become increasingly complicated over time. New technologies always come with an implicit political agenda that seeks to extend its use, governments react by legislating to control the technologies, and sometimes we end up with industries indulging in legal duels.

For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from—and before you know it, you end up with a permanent occupation force in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl, the construction of human infrastructure on the scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies like buses or trams (which work best in cities with a high population density).

To get these laws in place, providing an environment conducive to doing business, corporations spend money on political lobbyists—and, when they can get away with it, on bribes. Bribery need not be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch services and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was purely a politician. However, Marples accumulated a considerable personal fortune during this time by owning shares in a motorway construction corporation. (So, no conflict of interest there!)

The automobile industry in isolation isn't a pure paperclip maximizer. But if you look at it in conjunction with the fossil fuel industries, the road-construction industry, the accident insurance industry, and so on, you begin to see the outline of a paperclip maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year—that's the global death toll from automobile accidents according to the world health organization: it rivals the first world war on an ongoing basis—as side-effects of its drive to sell you a new car.

Automobiles are not, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions: they're fast, efficient, and comfortable. We can thank legally mandated regulations for this, of course. Go back to the 1970s and cars didn't have crumple zones. Go back to the 1950s and cars didn't come with seat belts as standard. In the 1930s, indicators—turn signals—and brakes on all four wheels were optional, and your best hope of surviving a 50km/h crash was to be thrown clear of the car and land somewhere without breaking your neck. Regulatory agencies are our current political systems' tool of choice for preventing paperclip maximizers from running amok. But unfortunately they don't always work.

One failure mode that you should be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the American Federal Communications Commission who just voted to eliminate net neutrality rules, has worked as Associate General Counsel for Verizon Communications Inc, the largest current descendant of the Bell telephone system monopoly. Why should someone with a transparent interest in a technology corporation end up in charge of a regulator for the industry that corporation operates within? Well, if you're going to regulate a highly complex technology, you need to recruit your regulators from among those people who understand it. And unfortunately most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, and wants to do something about it—just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they are supposed to control, they frequently end up working with their former officemates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.

Another failure mode is regulatory lag, when a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities were tracked and their privacy might be violated. This would have been a good idea, had it shown up in 1993 or 1996, but unfortunately it didn't show up until 2011, by which time the web was vastly more complex. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every twelve months of real-world time; by that yardstick, the cookie law came out nearly a century too late to do any good.

Again, look at Uber. This month the European Court of Justice ruled that Uber is a taxi service, not just a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is arguable; the problem is, a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her life.

So, to recap: firstly, we already have paperclip maximizers (and Musk's AI alarmism is curiously mirror-blind). Secondly, we have mechanisms for keeping them in check, but they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with. Finally, unlike the naive vision of a paperclip maximizer, existing AIs have multiple agendas—their overt goal, but also profit-seeking, and expansion into new areas, and accommodating the desires of whoever is currently in the driver's seat.

How it all went wrong

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere I look I see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human voters in favour of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side-order of racism on top. Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

Now, this is CCC, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns in human behaviour that repeat across time scales outside our personal experience—decades or centuries apart. If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

(Note: Cory Doctorow has a contrarian thesis: The dotcom boom was also an economic bubble because the dotcoms came of age at a tipping point in financial deregulation, the point at which the Reagan-Clinton-Bush reforms that took the Depression-era brakes off financialization were really picking up steam. That meant that the tech industry's heady pace of development was the first testbed for treating corporate growth as the greatest virtue, built on the lie of the fiduciary duty to increase profit above all other considerations. I think he's entirely right about this, but it's a bit of a chicken-and-egg argument: we wouldn't have had a commercial web in the first place without a permissive, deregulated financial environment. My memory of working in the dot-com 1.0 bubble is that, outside of a couple of specific environments (the Silicon Valley area and the Boston-Cambridge corridor) venture capital was hard to find until late 1998 or thereabouts: the bubble's initial inflation was demand-driven rather than capital-driven, as the non-tech investment sector was late to the party. Caveat: I didn't win the lottery, so what do I know?)

The ad-supported web that we live with today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. (What ads there were were mostly spam, on usenet and via email.) 1995 was the year the world wide web really came to public attention in the anglophone world and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for (the original dot com bubble was largely about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process). And the naive initial assumption was that the transaction cost of setting up a TCP/IP connection over modem was too high to be supported by per-use microbilling, so we would bill customers indirectly, by shoving advertising banners in front of their eyes and hoping they'd click through and buy something.

Unfortunately, advertising is an industry. Which is to say, it's the product of one of those old-fashioned very slow AIs I've been talking about. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball: the coupling of advertising with web search was an inevitable outgrowth. (How better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?)

The problem with applying the paperclip maximizer approach to monopolizing eyeballs, however, is that eyeballs are a scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests and it doesn't matter how often you flash an ad for dog biscuits at me, I'm never going to buy any. (I'm a cat person.) To make best revenue-generating use of our eyeballs, it is necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely in hope of hooking us with stuff we're attracted to.

At this point in a talk I'd usually go into an impassioned rant about the hideous corruption and evil of Facebook, but I'm guessing you've heard it all before so I won't bother. The too-long-didn't-read summary is, Facebook is as much a search engine as Google or Amazon. Facebook searches are optimized for Faces, that is, for human beings. If you want to find someone you fell out of touch with thirty years ago, Facebook probably knows where they live, what their favourite colour is, what size shoes they wear, and what they said about you to your friends all those years ago that made you cut them off.

Even if you don't have a Facebook account, Facebook has a You account—a hole in their social graph with a bunch of connections pointing into it and your name tagged on your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers who then target you, even if you don't think you use Facebook. Indeed, there's barely any point in not using Facebook these days: they're the social media Borg, resistance is futile.

However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
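The ranking logic described above—boost whatever provokes the strongest reaction, pleasant or not—can be sketched in a few lines. The item fields and weights here are invented for illustration; real feed rankers use learned models over thousands of signals, but the incentive structure is the same:

```python
# Toy engagement-maximizing feed ranker. Field names and weights are
# hypothetical; the point is that negative emotions can outrank pleasant ones.

def engagement_score(item):
    """Predicted attention capture, regardless of how the content makes you feel."""
    return (2.0 * item.get("outrage", 0.0)      # anger keeps eyes on screen
            + 1.5 * item.get("fear", 0.0)       # the slow-past-the-accident reflex
            + 1.0 * item.get("delight", 0.0))   # pleasant stimuli rank lower

def rank_feed(items):
    """Show the most reaction-provoking content first."""
    return sorted(items, key=engagement_score, reverse=True)

feed = [
    {"id": "kitten",   "delight": 0.9},
    {"id": "outrage",  "outrage": 0.8, "fear": 0.3},
    {"id": "accident", "fear": 0.7},
]
print([item["id"] for item in rank_feed(feed)])  # → ['outrage', 'accident', 'kitten']
```

Note that nothing in this sketch is malicious per se: it is just maximizing a metric. The outcome—outrage first, kittens last—falls out of the optimization target.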

This brings me to another interesting point about computerized AI, as opposed to corporatized AI: AI algorithms tend to embody the prejudices and beliefs of the programmers. A couple of years ago I ran across an account of a webcam, developed by mostly-pale-skinned Silicon Valley engineers, that had difficulty focusing or achieving correct colour balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days after 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles.
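The point about bias creeping in through data rather than code can be shown with a deliberately tiny example. The "model" below is just frequency counting—there is nothing prejudiced in the algorithm itself—but fed a skewed historical sample, it faithfully learns and reproduces the skew. The labels and data are invented:

```python
# Bias enters via training data, not via the learning code.
from collections import Counter

def train(examples):
    """Learn P(label | feature) by counting — the algorithm itself is neutral."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(model, feature):
    """Return the most frequent label seen for this feature."""
    return model[feature].most_common(1)[0][0]

# Hypothetical historical decisions in which group "B" was mostly rejected:
history = ([("A", "hire")] * 8 + [("A", "reject")] * 2
           + [("B", "hire")] * 2 + [("B", "reject")] * 8)
model = train(history)
print(predict(model, "A"), predict(model, "B"))  # → hire reject
```

Scale the counting up to a deep neural network and a billion examples and the mechanism is the same: the system concretizes whatever pattern—fair or not—was in its training set.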

Humans may be biased, but at least we're accountable and if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

AI-based systems that concretize existing prejudices and social outlooks make it harder for activists like us to achieve social change. Traditional advertising works by playing on the target customer's insecurity and fear as much as on their aspirations, which in turn play on the target's relationship with their surrounding cultural matrix. Fear of loss of social status and privilege is a powerful stimulus, and fear and xenophobia are useful tools for attracting eyeballs.

What happens when we get pervasive social networks with learned biases against, say, feminism or Islam or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits? Deep learning systems like the ones inside Facebook that determine which stories to show you to get you to pay as much attention as possible to the adverts?

I think you already know the answer to that.

Look to the future (it's bleak!)

Now, if this is sounding a bit bleak and unpleasant, you'd be right. I write sci-fi, you read or watch or play sci-fi; we're acculturated to think of science and technology as good things, that make our lives better.

But plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past the reflexive indignation at any criticism of technology and progress, you might agree that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously: they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave.

Nerve gas and leaded gasoline were 1930s technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and intercontinental ballistic missiles date to the 1950s. I submit that the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs have broken down.

Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn't an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.

(Note that I do not have a solution to the regulatory problems I highlighted earlier, in the context of AI. This essay is polemical, intended to highlight the existence of a problem and spark a discussion, rather than a canned solution. After all, if the problem was easy to solve it wouldn't be a problem, would it?)

Firstly, Political hacking tools: social graph-directed propaganda

Topping my list of dangerous technologies that need to be regulated, this is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations. They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues. The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.
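Stripped of the deep learning, the targeting pipeline reduces to a filter-and-match: score persuadability, keep only swing-district voters, then serve each one the ad that fires their particular hot button. All the field names, districts, thresholds, and ad copy below are invented placeholders:

```python
# Sketch of social-graph-directed propaganda targeting. Everything here is
# hypothetical; real pipelines infer these fields with learned models.

def pick_targets(voters, swing_districts, threshold=0.6):
    """Keep persuadable voters who can actually swing a result."""
    return [v for v in voters
            if v["district"] in swing_districts
            and v["persuadability"] >= threshold]

def choose_message(voter, ads_by_issue):
    """Match each target with the ad aimed at their personal hot button."""
    return ads_by_issue.get(voter["hot_button"], ads_by_issue["generic"])

voters = [
    {"name": "v1", "district": "OH-12", "persuadability": 0.8, "hot_button": "jobs"},
    {"name": "v2", "district": "CA-01", "persuadability": 0.9, "hot_button": "crime"},
    {"name": "v3", "district": "OH-12", "persuadability": 0.3, "hot_button": "crime"},
]
ads = {"jobs": "They took your job.", "crime": "Your streets aren't safe.",
       "generic": "Vote for change."}
targets = pick_targets(voters, {"OH-12"})
print([(v["name"], choose_message(v, ads)) for v in targets])
# → [('v1', 'They took your job.')]
```

The persuadable voter in the safe seat and the stubborn voter in the swing seat are both ignored: propaganda budgets, like ad budgets, go only where they move the needle.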

I'm biting my tongue and trying not to take sides here: I have my own political affiliation, after all. But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls. And this won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage via spear phishing and distributed denial of service attacks on infrastructure (which are practiced by western agencies as well). Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

(By the way, I really hate the cyber- prefix; it usually indicates that the user has no idea what they're talking about. Unfortunately the term 'cyberwar' seems to have stuck. But I digress.)

Secondly, an adjunct to deep learning targeted propaganda is the use of neural network generated false video media.

We're used to Photoshopped images these days, but faking video and audio is still labour-intensive, right? Unfortunately, that's a nope: we're seeing first generation AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip using software rather than a laborious human process. (Yes, of course porn is the first application: Rule 34 of the Internet applies.) Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it'll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don't like doing something horrible.

We're already seeing alarm over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords. Many of these channels are shaped by paperclip-maximizer advertising AIs that are simply trying to maximize their search ranking on YouTube. Add neural network driven tools for inserting Character A into Video B to click-maximizing bots and things are going to get very weird (and nasty). And they're only going to get weirder when these tools are deployed for political gain.

We tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us—and we're already too vulnerable to fake news as it is. Soon they'll come for us, armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
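The "cryptographic signatures linking video back to the device" idea sketches out simply enough: the capture device signs a hash of the raw feed at recording time, so any later edit is detectable. A real scheme would use public-key signatures generated inside a secure element; here an HMAC with a (hypothetical) device key stands in, using only the Python standard library:

```python
# Sketch of device-attested video. Assumption: each camera holds a secret
# key in tamper-resistant hardware; HMAC stands in for a real public-key
# signature scheme.
import hashlib
import hmac

DEVICE_KEY = b"secret-key-burned-into-camera-hardware"  # hypothetical

def sign_footage(raw_bytes):
    """Sign a digest of the raw sensor data at capture time."""
    digest = hashlib.sha256(raw_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(raw_bytes, signature):
    """Check the footage against the signature; any edit breaks the match."""
    return hmac.compare_digest(sign_footage(raw_bytes), signature)

clip = b"\x00\x01... raw sensor data ..."
sig = sign_footage(clip)
assert verify(clip, sig)                    # untouched footage checks out
assert not verify(clip + b"tampered", sig)  # a single changed byte fails
```

Of course this only authenticates the bytes, not the scene—point a trusted camera at a screen playing a fake and the signature is perfectly valid—which is part of why the smart money still leans towards chaos.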

Paperclip maximizers that focus on eyeballs are so 20th century. Advertising as an industry can only exist because of a quirk of our nervous system—that we are susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behaviour when we see it. Or do we? It turns out that the human brain's reward feedback loops are relatively easy to game. Large corporations such as Zynga (Farmville) exist solely because of it; free-to-use social media platforms like Facebook and Twitter are dominant precisely because they are structured to reward frequent interaction and to generate emotional responses (not necessarily positive emotions—anger and hatred are just as good when it comes to directing eyeballs towards advertisers). "Smartphone addiction" is a side-effect of advertising as a revenue model: frequent short bursts of interaction keep us coming back for more.

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable. It goes a bit beyond automated A/B testing; A/B testing allows developers to plot a binary tree path between options, but true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn't a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
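The distinction between A/B testing and an addictiveness maximizer can be made concrete: A/B testing compares two fixed variants, while the maximizer behaves more like a multi-armed bandit, continuously reallocating users towards whichever variant keeps them hooked. A minimal epsilon-greedy sketch, with invented "stickiness" numbers standing in for measured engagement:

```python
# Minimal bandit-style engagement optimizer. The three "variants" and their
# stickiness values are hypothetical; a real system optimizes many
# attractors at once with learned reward models.
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Mostly exploit the best-looking variant; occasionally explore."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def update(estimates, counts, arm, reward):
    """Incrementally track the running mean reward per variant."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

true_stickiness = [0.2, 0.5, 0.8]   # variant 2 is the most habit-forming
estimates, counts = [0.0] * 3, [0] * 3
random.seed(0)
for _ in range(2000):
    arm = epsilon_greedy(estimates)
    update(estimates, counts, arm, true_stickiness[arm])
print(counts)  # nearly all traffic ends up on the stickiest variant
```

Nothing here needs a human to decide *why* variant 2 works; the loop simply funnels users towards it. That indifference to mechanism is exactly what makes the technique worrying in unscrupulous hands.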

Let me give you a more specific scenario.

Apple have put a lot of effort into making realtime face recognition work with the iPhone X. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place (and can tell whether they're open or closed), and recognizes your face from underlying bone structure through makeup and bruises. It's running continuously, checking pretty much as often as every time you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of this is to make it difficult for a phone thief to get anywhere if they steal your device. But it means your phone can monitor your facial expressions and correlate them against app usage. Your phone will be aware of precisely what you like to look at on its screen. With addiction-seeking deep learning and neural-network generated images, it is in principle possible to feed you an endlessly escalating payload of arousal-maximizing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you aren't even consciously aware of. But either way, the app now owns your central nervous system—and you will be monetized.

Finally, I'd like to raise a really hair-raising spectre that goes well beyond the use of deep learning and targeted propaganda in cyberwar.

Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls around Me. (Spoiler: Apple pulled it like a hot potato when word got out.) The app worked out where the user was via GPS, then queried FourSquare and Facebook for people matching a simple relational search—for single females (per Facebook) who had checked in (or been checked in by their friends) in your vicinity (via FourSquare). The app then displayed their locations on a map, along with links to their social media profiles.
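It's worth stressing how little machinery that app actually needed: the whole thing was a relational join between check-in locations and public profile attributes. A sketch of the core query, with invented users, coordinates, and profile fields:

```python
# The core of a "Girls around Me"-style query: a geo-filtered join between
# check-ins and profile attributes. All data and field names are invented.
from math import asin, cos, radians, sin, sqrt

def km_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def nearby_matching(checkins, profiles, here, radius_km, **filters):
    """Join check-in locations against profile attributes — the entire 'app'."""
    return [user for user, where in checkins.items()
            if km_between(here, where) <= radius_km
            and all(profiles.get(user, {}).get(k) == v
                    for k, v in filters.items())]

checkins = {"alice": (51.5074, -0.1278),   # central London
            "bob":   (48.8566, 2.3522)}    # Paris — out of range
profiles = {"alice": {"gender": "f", "single": True},
            "bob":   {"gender": "m", "single": True}}
print(nearby_matching(checkins, profiles, (51.5, -0.12), 5,
                      gender="f", single=True))  # → ['alice']
```

Ten minutes of coding against two public APIs, in other words—which is why pulling one app from one store solved nothing.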

If they were doing it today the interface would be gamified, showing strike rates and a leaderboard and flagging targets who succumbed to harassment as easy lays. But these days the cool kids and single adults are all using dating apps with a missing vowel in the name: only a creeper would want something like "Girls around Me", right?

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don't worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people's affiliations and locations, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.

Imagine you're young, female, and a supermarket has figured out you're pregnant by analysing the pattern of your recent purchases, like Target back in 2012.

Now imagine that all the anti-abortion campaigners in your town have an app called "babies at risk" on their phones. Someone has paid for the analytics feed from the supermarket and the result is that every time you go near a family planning clinic a group of unfriendly anti-abortion protesters engulfs you.

Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer-bash gay men only when they're alone or outnumbered 10:1. (That's the special horror of precise geolocation.) Or imagine you're in Pakistan and Christian/Muslim tensions are mounting, or you're in rural Alabama, or ... the possibilities are endless.

Someone out there is working on it: a geolocation-aware social media scraping deep learning application, that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin, by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.

And the worst bit of this picture?

Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims; it's just a paperclip maximizer doing what it does—and you are the paper.

Cloks
Feb 1, 2013

by Azathoth
i feel as bad for the landlord not having money as the landlord would feel for me if i didn't have money

spacetoaster
Feb 10, 2014

crispyseaweed posted:

He has a follow up video which talks about floating Chinese saw mills in international waters.

I don't know about that, but China is buying a poo poo ton of our timber.

https://www.globalwoodmarketsinfo.com/

Petey
Nov 26, 2005

For who knows what is good for a person in life, during the few and meaningless days they pass through like a shadow? Who can tell them what will happen under the sun after they are gone?

that's pretty good. see also, and from around the same time, https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Grimmelmann-pp-217-33.pdf

Rutibex
Sep 9, 2001

by Fluffdaddy

Cloks posted:

i feel as bad for the landlord not having money as the landlord would feel for me if i didn't have money

the landlord would feel bad because he cant take money from you if you dont have any

Cloks
Feb 1, 2013

by Azathoth

Rutibex posted:

the landlord would feel bad because he cant take money from you if you dont have any

he? we need 👏 more landladies 👏

Slow News Day
Jul 4, 2007

Laterite posted:

"Regular people" with hundreds of thousands of dollars available for real estate purchases

I didn't mean "regular" as in regular, I meant non-investors.

HiHo ChiRho
Oct 23, 2010

Cloks posted:

he? we need 👏 more landladies 👏

Ugh these gendered terms are demeaning for our nonbinary rentiers

silentsnack
Mar 19, 2009

Donald John Trump (born June 14, 1946) is the 45th and current President of the United States. Before entering politics, he was a businessman and television personality.

PawParole posted:

what with work from home the migration from California will probably accerate.

imagine living in Nashville on a California salary.

not too familiar with Nashville region but over on the east even Knoxville is still disproportionately expensive to rent. If you want cheap and you're not buying a condemned mold pit, you'll be looking more at like living 30 miles from civilization out in Roane/Morgan County, or an industrially-blighted shithole ghost town like Clinton or Kingston.

human garbage bag
Jan 8, 2020

by Fluffdaddy
why not build a house out of plastic?

mawarannahr
May 21, 2019

I referred to my landperson the other day by accident but had meant landlord. Is there a need for an additional gender-neutral term such as this among the landpeople?

Zil
Jun 4, 2011

Satanically Summoned Citrus


NeonPunk posted:

In Austin, blaming California is the to go retort for everything. Traffic jam? Blame it on California's moving in! Gentrification? California. House prices? California. That has been going on for like the past 5 years.

To be fair, all the people complaining about Californians, have been Californians :v:

Pittsburgh Fentanyl Cloud
Apr 7, 2003


Slow News Day posted:

No, this is definitely not the only reason. Tons of regular people, whose stock portfolios exploded in 2020 thanks to the Fed, are also using cash now to buy houses where previously they would have taken out a mortgage. There are also those who have sold their more expensive homes in other states and have moved to cheaper states thanks to WFH policies. I have friends here in Austin who say they keep getting outbid by tech people moving here from California, for example. That's not a new thing but it certainly accelerated because of the pandemic.

Lol yeah, everyone knows that when you get outbid for a house you get a dossier on the winner.

euphronius
Feb 18, 2009

the Landthem


Pittsburgh Fentanyl Cloud
Apr 7, 2003


The work from home migration from California to Possum Ridge, if it exists at all, is insignificant in the face of institutional and investor money.
