Precambrian Video Games
Aug 19, 2002



Amazon Smile is ending. Incidentally, the quote about it not being integrated with the app isn't true (at least on Android), not that it matters anymore. I'm mostly surprised that this was actually costing Amazon enough that they'd bother to axe it, although perhaps they did some math and figured they weren't gaining much goodwill from their overwhelmingly generous 0.5% donations.


Precambrian Video Games
Aug 19, 2002



ErIog posted:

ML is itself heavy marketing smoke. Less so for some implementations, but it's an industry where people frequently call automated regression "machine learning" when it's clearly an application of statistical modeling.

If we ever do get actual ML or AI for real, what we currently call ML or AI will feel as aspirational as those "hoverboard" scooters.

An AI-enthusiast colleague of mine defined machine learning as anything from linear regression and beyond, so I'm afraid that particular semantic ship has sailed.

Precambrian Video Games
Aug 19, 2002



dr_rat posted:

Five bees, sorry but you got ripped off. No one pays more than two bees for a belt onion these days, old timer.

I bet they do in the Philippines.

Precambrian Video Games
Aug 19, 2002



But if the chatbot is hosted locally on your machine and maintains only minimal diagnostic logs and no record of user interactions?

Precambrian Video Games
Aug 19, 2002



GhostofJohnMuir posted:

i knew that the h-1b visa program is severely hosed and allows companies to be highly exploitative with their foreign workers, but i guess i still hadn't realized the scale of the problem. is it typical for someone to have been in the country for over a decade and still be on an h-1b? i guess i naively assumed that there would be an easy transition to permanent residency after a number of years

H-1Bs are for 3 years, extendable to 6, with a few ways to go beyond that. That guy's LinkedIn says he was in India until 2019, though, so he's probably still on his first H-1B.

It's a huge hassle to get a green card, doubly so for Indian citizens.

Precambrian Video Games
Aug 19, 2002



SniHjen posted:

This is the problem I have with this discussion: asking ChatGPT a question is the same as asking anyone a question.

No, asking a machine that is incapable of understanding anything is not the same as asking a human that is actually intelligent and capable of critical thought.

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I really don't see how ChatGPT is any different.

The difference is the sheer volume of bullshit that machines can generate near-instantaneously. I'll quote this section of an Ezra Klein interview with Gary Marcus again (originally posted in the ChatGPT thread):

quote:

EZRA KLEIN: Let’s sit on that word truthful for a minute because it gets to, I think, my motivation in the conversation. I’ve been interested — I’m not an A.I. professional the way you are, but I’ve been interested for a long time. I’ve had Sam on the show, had Brian Christian on the show. And I was surprised by my mix of sort of wonder and revulsion when I started using ChatGPT because it is a very, very cool program. And in many ways, I find that its answers are much better than Google for a lot of what I would ask it.

But I know enough about how it works to know that, as you were saying, truthfulness is not one of the dimensions of it. It’s synthesizing. It’s sort of copying. It’s pastiching. And I was trying to understand why I was so unnerved by it. And it got me thinking, have you ever read this great philosophy paper by Harry Frankfurt called “On Bullshit”?

GARY MARCUS: I know the paper.

EZRA KLEIN: So this is a — welcome to the podcast, everybody — this is a philosophy paper about what is bullshit. And he writes, quote, “The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”

And his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. He has this great line where he says that people telling the truth and people telling lies are playing the same game but on different teams. But bullshit just has no relationship, really, to the truth.

And what unnerved me a bit about ChatGPT was the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all. And I’m curious how you see that concern.

GARY MARCUS: It’s exactly right. These systems have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said and trying to maximize the probability of that. It’s just auto complete, and auto complete just gives you bullshit.

And it is a very serious problem. I just wrote an essay called something like “The Jurassic Park Moment for A.I.” And that Jurassic Park moment is exactly that. It’s when the price of bullshit reaches zero and people who want to spread misinformation, either politically or maybe just to make a buck, start doing that so prolifically that we can’t tell the difference anymore in what we see between truth and bullshit.

EZRA KLEIN: You write in that piece, “It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.” Why? Walk me through what that world could look like.

GARY MARCUS: Let’s say if somebody wants to make up misinformation about Covid. You can take a system like Galactica, which is similar to ChatGPT, or you can take GPT-3. ChatGPT itself probably won’t let you do this. And you say to it, make up some misinformation about Covid and vaccines. And it will write a whole story for you, including sentences like, “A study in JAMA” — that’s one of the leading medical journals — “found that only 2 percent of people who took the vaccines were helped by it.”

You have a news story that looks like, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them.

We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.

And I’ve been having this argument with Yann LeCun, who’s the chief A.I. scientist at Meta, and he’s saying, well, this isn’t really a problem. But already we’ve seen that this kind of thing is a problem. So it was something that really blew my mind around Dec. 4. This was right after ChatGPT came out. People used ChatGPT to make up answers to programming questions in the style of a website called Stack Overflow.

Now everybody in the programming field uses Stack Overflow all the time. It’s like a cherished resource for everybody. It’s a place to swap information. And so many people put fake answers on this thing, where humans ask questions and humans give answers, that Stack Overflow had to ban people putting computer-generated answers there. It was literally existential for that website. If enough people put answers that seemed plausible but were not actually true, no one would go to the website anymore.

And imagine that on a much bigger scale, the scale where you can’t trust anything on Twitter or anything on Facebook or anything that you get from a web search because you don’t know which parts are true and which parts are not. And there’s a lot of talk about using ChatGPT and its ilk to do web searches. And it’s true that, some of the time, it’s super fantastic. You come back with a paragraph rather than 10 websites, and that’s great.

But the trouble is the paragraph might be wrong. So it might, for example, have medical information that’s dangerous. And there might be lawsuits around this kind of thing. So unless we come up with some kinds of social policies and some technical solutions, I think we wind up very fast in a world where we just don’t know what to trust anymore. I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.

EZRA KLEIN: But isn’t it the case that search can be wrong now? Not just search — people can be wrong. People spread a lot of misinformation — that there’s a dimension of this critique that is holding artificial intelligence systems to a standard the society itself does not currently meet?

GARY MARCUS: Well, there’s a couple of different things there. So one is I think it’s a problem in difference in scale. So it’s actually problematic to write misleading content right now. Russian trolls spent something like a million dollars a month, over a million dollars a month during the 2016 election. That’s a significant amount of money. What they did then, they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of bound by the human hours.

That’s got to make a difference. I mean, it’s like saying, we had knives before. So what’s the difference if we have a submachine gun? Well, submachine gun is just more efficient at what it does. And we’re talking about having submachine guns of misinformation.

So I think that the scale is going to make a real difference in how much this happens. And then the sheer plausibility of it, it’s just different from what happened before. I mean, nobody could make computer-generated misinformation before in a way that was convincing.

In terms of the search engines, it’s true that you get misleading information. But we have at least some practice — I wish people had more — at looking at a website and seeing if the website itself is legit. And we do that in different kinds of ways. We try to judge the sources and the quality. Does this come from The New York Times, or does it look like somebody did it in their spare time in their office and maybe it doesn’t look as careful? Some of those cues are good and some are bad. We’re not perfect at it. But we do discriminate, like does it look like a fake site? Does it look legit and so forth.

And if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that actually threatens the websites themselves, the search engines themselves.

Boris Galerkin posted:

Because it's a useful tool for some people out there who know how to use it and have a reason to use it? See: Photoshop, AutoCAD, Solidworks, etc, etc, etc. All tools which are generally useless for the vast majority of people, but extremely useful for some.

e: Maya, Blender, Final Cut Pro, Illustrator, etc

I don't follow the relevance of these comparisons. But note that the discussion of ChatGPT can be separated from that of image generators like DALL-E, because the former can be and often is used to answer questions with an actual verifiable correct response, whereas image generators are not. Granted, you can ask ChatGPT to generate poetry or other creative writing too, but I don't think anyone is bothering to ask DALL-E to solve a math problem photorealistically in the style of Picasso.

Precambrian Video Games fucked around with this message at 16:47 on Jan 29, 2023

Precambrian Video Games
Aug 19, 2002



Boris Galerkin posted:

ChatGPT can be used to generate more conspiracy theories by the vast majority of people sure. But a small minority of people could also use it to do much, much more.

Such as? And regardless of your answer, "it can maybe be used for unspecified good" is not a coherent rebuttal to the point that it is fundamentally incapable of reasoning or distinguishing truth and fiction.

Boris Galerkin posted:

The underlying point is that it's a tool, no different from anything else. People believing conspiracy theorists' blogs over subject matter experts is a people problem, not a tech problem (even though tech is what enables their reach).

Some tools are harmful, you know? So give an example of a similar preexisting tool that gives you pages of bullshit with minimal effort. And "Google search" is not a good answer, because while it certainly is a tech nightmare of its own, at least it directly links to sources and doesn't usually try to give a (potentially completely wrong) definitive answer. Hell, it's integrated with a calculator that usually does give the right answer.

Precambrian Video Games
Aug 19, 2002



Vegetable posted:

The Ezra Klein interview seems consistently misguided. A big part of their concern is “AI has driven the cost of bullshit down but it hasn’t done anything to the cost of truth.”

A huge, if not the biggest, cost of truth is the writing. How many questions go unanswered because you have to write the drat answer? How many more educational, legal and otherwise productive texts could be written if you could start with a half-correct base and redraft it from there? As any writer knows, it’s far easier to edit than to compose.

To be clear, I’m not making the bigger argument that the world becomes more truthful as a result of ChatGPT. It’s too early to say anything one way or another. But it’s super easy to see how AI can make life so much easier for truth tellers, and the pessimists need to at least reckon with that side of things.

By suggesting that GPT come up with a half-correct base, you're conflating the writing of generic prose with the more important step of coming up with a worthwhile idea and doing the research to support the conclusions (that bit should be the most time-consuming aspect). I don't need to waste time considering hypothetical genre-defining works co-authored by ChatGPT when there are already perfectly good examples of bargain basement outlets like CNET and BuzzFeed hopping on the GPT train to flood the internet with more generic crap just to cut their workforce even closer to the bone.

If your broader point is that there is a use for a tool to assist in composing generic prose, perhaps. If we're talking about scientific texts, then I'd say that the prose tends to be the least important aspect of a research paper, and I'd rather find an alternative to the old-school manuscript to obviate the need to write all that prose in the first place. For more philosophical texts, I think it's harder to separate the prose from the argument when there isn't a definitive right answer.

Precambrian Video Games fucked around with this message at 18:37 on Jan 29, 2023

Precambrian Video Games
Aug 19, 2002



Since it has been at least a week since the last complaint about Adobe Creative Cloud, I wrote some boring poo poo complaining about it and replaced it with this summary:

- If you have Acrobat Reader and try to create a pdf from another file (jpg/png, whatever) while logged in to an Adobe account, it will upload it to ~the cloud~ and make you download the converted file, giving you no option that I could see to save it locally only.
- If you do have an Adobe account (my employer pays what I'm guessing is an exorbitant sum for subscriptions), you'll need to uninstall Reader, possibly reinstall the unimaginably useless Creative Cloud app, and then download the offline installer for Acrobat Pro, which is 1.2GB for some unfathomable reason. Then as far as I can tell conversion happens entirely locally and you can concatenate files too.

Yes, I know that you can do most of this poo poo with free tools. I could also complain about institutions replacing their perfectly functional and probably much cheaper email systems with Office365 poo poo.
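For what it's worth, a pdf is just a plain file format that any local code can emit, with no cloud round-trip required. A toy sketch in Python: it writes one blank US Letter page from scratch. (Actual jpg-to-pdf conversion is what free tools like img2pdf are for; this is only to show there's nothing inherently cloudy about the format.)

```python
def minimal_pdf(path):
    """Write a minimal single blank-page PDF entirely locally.
    A toy, not a converter: three objects (catalog, page tree,
    one page), then the cross-reference table and trailer."""
    objs = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(objs, start=1):
        offsets.append(len(out))  # byte offset of each object, for the xref
        out += b"%d 0 obj\n%s\nendobj\n" % (i, body)
    xref_pos = len(out)
    out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objs) + 1)
    for off in offsets:
        out += b"%010d 00000 n \n" % off  # each xref entry is exactly 20 bytes
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objs) + 1, xref_pos))
    with open(path, "wb") as f:
        f.write(bytes(out))

minimal_pdf("toy.pdf")
```

Under a kilobyte, generated offline, and it opens in any pdf viewer.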

Precambrian Video Games
Aug 19, 2002



Finally, Facebook gave me an ad I couldn't possibly hate more:

Precambrian Video Games
Aug 19, 2002



Boris Galerkin posted:

Cancel Scott Kelly.



Guy signing up to spend months on end in a tiny metal box telling me to go outside! gently caress off, you go for a walk outside!

Precambrian Video Games
Aug 19, 2002



You could just listen to this poster and get the:


... which exists and is nearly the same size as the mini.

Or in Android land, the annoyingly bigger but much better Galaxy S non-plus phones. I still hate the 21:9 aspect ratio because you can't reach the whole screen one handed but you may as well go to the Android thread and complain about the death of removable batteries/SD cards/headphone jacks and see what good that does.

Precambrian Video Games
Aug 19, 2002



evilweasel posted:

given that we are currently experiencing problems with inflation, we definitely cannot just print more money.

a lot of people got very bad ideas about how economics works from the past 15 years or so (very reasonably, given this is long enough that it is much of the adult life of many people): it is not normal for interest rates to be effectively zero, which they've basically been since Lehman collapsed up until a year ago. during that time period we were experiencing a demand-side shortfall (i.e. not enough people demanding goods and services compared to what the economy could supply), which is a circumstance in which you can print money without causing inflation. however, during normal times, printing money causes inflation (even if you are a global reserve currency). during times where the economy may be supply-locked (i.e. the economy cannot produce the amount of goods and services being demanded at current prices), printing money causes a lot of inflation.

Which were the "normal" times in the last, say, 50 years, and when do you expect things to get back to "normal"? Because to me it looks like economics (and mainstream theories thereof) is going off the rails - literally at times - and pundits are grasping at straws to explain what happened in the last 6-12 months, let alone predict what's coming next.

Precambrian Video Games
Aug 19, 2002



cinci zoo sniper posted:

The answer seems to be written right there in the post, pre-BFC. Also, you're conflating “normal times” with interest rates being meaningful in western economies.

I assume you mean pre-GFC, and to that end, maybe?



As I've had it explained, the job of the US fed is to tweak interest rates to keep inflation at 2%. I suppose that graph shows that rate hikes may have actually done that from 2004-2006, although not to the same degree as in the 1980s. But...

evilweasel posted:

you can see how aberrant the last 15 years are for interest rates here: https://www.macrotrends.net/2015/fed-funds-rate-historical-chart. what is "normal" in an economy is different from what is "normal" for interest rates: currently, we have more historically normal interest rates but our economy is highly abnormal right now as part of the aftershocks of covid.

mainstream economics isn't really going off the rails at all right now.

Part of the reason I asked when the last time economics was "normal" was that the start of the GFC was what, 5-6 years removed from the end of the dot-com bubble bursting? Was the economy otherwise normal while these two massive bubbles were inflating, until they burst and things were very much not normal again? There hasn't been a similar bubble burst in 15 years now, is that surprising or did the pandemic just delay the inevitable (tech again?)?

evilweasel posted:

there are basically two issues you have to understand when understanding economics and when it's useful vs not. first: often the results of economic theory are essentially very political decisions. there is a very very very high risk of intentionally or unintentionally biasing economic theory to fit desired political outcomes. the closer an economic theory comes to giving political advice, the reality is the more likely it is to be garbage. a lot of what does give political advice is understood and known if it's bad or good, but actual economics is ignored by people in power in favor of cranks that say what they want to say. however, this is not really a critical issue for this particular discussion - but it's really important to keep in mind.

the second is: getting reliable data is very very hard. take a fairly discrete issue that has a right, knowable answer: how many people in the united states have a job right now? that is something we have spent decades and decades and decades trying to measure and we're still not very good at it: we frequently have to go back and significantly revise previous employment estimates. and that's despite all of the information being collectable in the united states, and nobody really having an incentive to lie.

for the current economic situation, we are in fairly uncharted waters - a global pandemic on a scale that hasn't really happened in 100 years - and there has been a lot of just basic data collection that has been really hard to get. set aside job numbers: it has been tremendously difficult to get a firm grasp of the level of fuckery in the supply chain, how permanent it is vs how temporary it is, and so on. much of this data is not public, in the hands of people with an incentive to lie about it, and/or abroad.

I'm not really convinced that economic theories that fall apart in the absence of perfect data would actually work that well if such data existed, but it's not really a field I follow so I won't belabor the point. The question I'm more interested in is when are conditions expected to return to "normal", or are we going to continue to find ourselves in unusual situations that defy explanation for the foreseeable (or not) future, in which case what use are these theories for spherical frictionless cows?

evilweasel posted:

now basically, to vastly oversimplify: inflation is when demand (at current price levels) outstrips supply over the economy as a whole. usually this is because demand is artificially boosted. if you give everyone a pile of money, they go out and spend it. that's why "printing money" causes inflation, basically: there's more money to spend, ergo more demand, but not more supply. that also happens when people are just wildly overconfident about the economy because they're paper rich (because the economy is overheating and they have invested in bubbles). the basic way you deal with this via monetary policy is to take actions that effectively reduce the money supply (thereby lowering demand) and increase interest rates (thereby also lowering demand).

here, we have a very odd situation where the cause of inflation is abruptly limited supply. the question that has bedeviled people is: how limited, for how long? hence, the whole debate over if inflation would be "transitory" or not. if the supply situation is going to unfuck itself within a month or two - there's no reason to go out and destroy demand, because then supply unfucks itself before monetary policy destroys demand, then demand gets destroyed, and you have oversupply and a recession.

what the fed has been trying to do is, basically, exactly tune the demand destruction to match up with supply so prices stop increasing. if they undershoot demand destruction, then inflation persists. if they overshoot, then they cause a recession. if they get it exactly right, you get a "soft landing" where inflation stops and the economy cools down to its "natural" growth rate.

the issue is that there are tons of tools in the toolbox macroeconomics has proposed, many of which work very very well and in fact work better than monetary policy. the issue is: they require congressional action and are much, much more political than monetary policy (which both parties have mostly agreed to keep out of politics, as a government loving around with monetary policy directly tends to lead to very bad economic results - Turkey is currently learning that right now). so, in the United States, they're virtually unusable unless the republican party feels like saving the economy that year, and the only real levers that work are those controlled by the fed.

Economists - or at least, the pundits that make the news - are still making oblique references to pandemic stimulus affecting inflation now, years after the fact (at least in the US; I gather most other countries are winding down their COVID stimulus/support as well e.g. CERB). Last year they were still arguing about whether supply chain disruptions were to blame for inflation or not. Paul Krugman was going on about being on Team Transitory and declaring victory over inflation up until a couple of weeks ago when monthly inflation numbers were revised significantly upwards, then he shrugged and went back to blaming unreliable data and concern trolling about "wage inflation". And actually there were a bunch of graphs showing that a significant amount of pandemic stimulus went to "excess" savings... for a time, until they didn't.

Anyway, I don't want to give the impression that I understand any of this, I'm mostly just skeptical of the existence of this "normal" that we're supposed to be heading towards. I find it more likely that the near future will be punctuated by yet more unique crises that defy explanation and also end up increasing the gap between the rich and everyone else, again.
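For anyone following along, the "tweak interest rates to hit 2%" machinery that keeps coming up in this exchange is usually summarized by the textbook Taylor rule. A toy sketch (the 0.5 coefficients and the 2% neutral real rate are Taylor's 1993 defaults, not anything the Fed actually commits to):

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, target_inflation=2.0):
    """Textbook Taylor (1993) rule: a suggested nominal policy rate,
    in percent. Raises the rate when inflation runs above target or
    the economy runs above potential (positive output gap)."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# At target inflation and a closed output gap, the rule returns the
# neutral nominal rate: 2% real + 2% inflation = 4%.
print(taylor_rule(2.0, 0.0))  # 4.0
# Inflation running hot at 6% with a closed output gap:
print(taylor_rule(6.0, 0.0))  # 10.0
```

The rule is descriptive, not what the FOMC mechanically follows, but it makes the "undershoot vs. overshoot the demand destruction" tradeoff concrete: every point of excess inflation moves the prescribed rate by one and a half points.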

Precambrian Video Games
Aug 19, 2002



evilweasel posted:

i took this second part out of order because i think it gets at the key issue here. i think you're ascribing something to the concept of "normal" that is very different than what it actually means. first, what i said is that it was not normal for interest rates to be effectively zero. which, as the graphs both you and i posted show, is pretty indisputably correct. that has had significant warping effects on the economy that will not be present when interest rates are in a more normal range. that does not mean that "all will be normal" at all other times, nor that "normal" is some sort of judgement that Things Are Good. it means things are more normal in how the economy responds to things because interest rates are at a non-zero level.

the dot-com bubble was relatively normal. the boom/bust cycle of economies is very historically normal - it comes from, basically, imperfect information leading to swings of overinvestment (a bubble) / underinvestment (a recession). the current tech boom not popping (and the tech boom getting where it did) for so long is historically abnormal but not really surprising. it is related to zero interest rates: the whole point of low interest rates is to spur investment to combat underinvestment in a recession, but it is well known that causes bubbles which results in the need to "cool down" the economy by raising interest rates.

the current tech bubble is, basically, why a discussion of interest rates makes sense in a tech thread. the prevalence of incredibly stupid tech ideas that got scads of money is a result of near-zero interest rates: because near-zero interest rates force capital looking for returns into more and more speculative investments because there's more capital looking for those returns than there are safe, clearly good ideas for investment.

again, you seem to be thinking "normal" is a normative judgement where normal = good. that's not the case. normal means, well, the usual course of an economy - i.e. not dealing with the aftershocks of a global financial crisis that provoked a very long and deep recession, and not dealing with a once-in-100-years pandemic. that does not mean that when we return to normal you will get everything you want. the divergence between the rich and poor is largely related to tax policy and other levers over how you distribute the gains of the economy, which are largely under the control of congress and not the fed.

I'm not disputing that near-zero interest rates are abnormal. Let me try to summarize in almost-chronological order, and correct me where I'm wrong or you disagree:

- The GFC was caused largely by subprime lending, an avenue of unusually cheap credit in a time of low (2002-2004) but not quite historically or abnormally low (2005-2007) interest rates. Either way, it wasn't tech and is not really the thread topic.
- The dot-com bubble was standard speculative investing in a time of fairly normal interest rates. I'm unaware of a source of cheap credit that fuelled it but I wasn't exactly following politics closely then. Otherwise, is there some useful insight into what caused it?
- The crypto bubble has sort of half-popped, largely due to regulatory action and not so much thanks to interest rate hikes. SBF and FTX's collapse haven't destroyed the "industry" just yet, and BTC in particular has been unusually stable for 6+ months.
- The current tech bubble outside of crypto is sort of deflating, although the NASDAQ peaked in Nov 2021, a few months before Fed rate hikes. So it's unclear that rising interest rates caused the slightly diminished exuberance even if near-zero interest encouraged it in the first place. Didn't both investors and companies also have huge piles of cash on hand that they had nothing else to do with, leading to stock buybacks in the latter case?

Back to the original discussion, I'm concerned by the rather blase attitude by some pundits as to whether further interest rate hikes might trigger a recession and whether that's necessary or worthwhile to control inflation (see also the debate over whether 2% inflation is even possible anymore or whether it should be 3%, though that's also not really a thread topic). The Fed especially being wrong about what's causing inflation this time around seems like it might have extreme consequences.

Precambrian Video Games
Aug 19, 2002



evilweasel posted:

the key about getting chatGPT to do simple programs is it is rather easy to validate the answer

if you ask chatGPT to write you an essay with sources it is much harder to determine if it's bullshitting you, compared to copy/pasting the code and seeing if it compiles

Setting aside that compiling is a low bar (though still a higher one than merely not having syntax errors in an interpreted language), that depends on what you mean by simple. I don't use ChatGPT (or Copilot) myself, but I've seen plenty of posted examples where it will write something that works fine for a specific input example but not for something very slightly different, which I suppose is fine if you just need a one-off that you can easily validate, but not so much otherwise. Of course, you can ask it to write unit tests if you intend to reuse the code at all. Funnily enough, the first two results I found for that were a positive one (starting in Java and then using the cloud testing framework they're trying to sell) and a much more negative one with Python/Selenium.
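To make the "works on the demoed input, breaks on its neighbors" failure mode concrete, here's a hypothetical sketch (invented for illustration, not an actual ChatGPT output):

```python
def parse_total(s):
    """Sum a comma-separated list of integers, e.g. "1,2,3" -> 6.
    Plausible-looking, and fine on the one input it was demoed on."""
    return sum(int(x) for x in s.split(","))

def blows_up(s):
    """Helper: does parse_total raise ValueError on this input?"""
    try:
        parse_total(s)
        return False
    except ValueError:
        return True

assert parse_total("1,2,3") == 6  # the one-off case: looks done
# ...but a trailing comma or an empty string raises ValueError,
# which "it compiles" / "no syntax errors" would never catch:
assert blows_up("1,2,3,")
assert blows_up("")
```

A unit test over a handful of awkward inputs surfaces this in seconds; watching the code run once on the happy path does not.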

And on that note, Do Users Write More Insecure Code with AI Assistants?:

quote:

We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.

Precambrian Video Games
Aug 19, 2002



StumblyWumbly posted:

Has there been anything more written about why/how SVB was encouraging folks to keep all their money with them, instead of spreading it around to other banks?

It seems like you're asking why a bank was seeking wealthy depositors instead of telling them to give someone else money and I'm confused as to why you think this needs an explanation.

Precambrian Video Games
Aug 19, 2002



StumblyWumbly posted:

Why did the depositors do it. Why did some VCs require the startups keep money with SVB.
Banking risk is pretty small, but also seems pretty easy to avoid.

Based on a bunch of tweets, depositors did it partly because they couldn't get anyone else to lend them money to start their disruptive business, or they got better terms from SVB on loans, or their investors made them do it, or SVB itself strongly encouraged them to:

https://twitter.com/girdley/status/1634634172290457600
https://twitter.com/girdley/status/1634635729157382146

I've never had >$250k so I don't know what exactly was stopping startups from opening accounts elsewhere (other than SVB refusing to transfer funds out to other banks, if that's even legal). As for VCs, I guess managers aren't using their own money and didn't care enough, and investors either didn't know or were just regular stupid.

Precambrian Video Games
Aug 19, 2002



Arsenic Lupin posted:

When you (you a corporation, not you a person) have tens of millions, parcelling them out into individual 250K deposits is nearly impossible.

Perhaps, although that apparently hadn't stopped Giannis Antetokounmpo from opening 50-odd $250k accounts. I just don't understand what incentives a bank could offer a company with multiple millions in deposits to keep everything there as opposed to spreading it across at least a handful of other accounts.

Precambrian Video Games
Aug 19, 2002



Epic High Five posted:

And the worst part is it's too long for a thread title

Breaking news: PlaygroundAI user “N-word Balls” makes a pretty convincing goatse.

Precambrian Video Games
Aug 19, 2002



Maybe they shouldn't have trained it on so many eulogies of deceased Google products (except some of them were actually good).

Precambrian Video Games
Aug 19, 2002



So it's like this tweet, but for rollershoes. Cool.

https://twitter.com/elonmusk/status/1354680585139187713?lang=en

Precambrian Video Games
Aug 19, 2002



Kyte posted:

Sure except it's already been shown to work?

By what, promotional videos? I clicked on a CNN article from last November and it has the same five-second clip of a guy stopping and climbing stairs. All of the other footage is of walking forwards and was provided by the company. Their own FAQ says this about how to stop:

quote:

The simple answer is that you just stop walking or slow down your walking pace. Our AI reacts instantaneously to your gait / how you walk and stopping is seamless. From top speed to zero, you can stop within 3 feet, or about the same distance as you would stop if you were jogging.

I dunno about you, but I'd expect a little more detail from a $1,400 product that can send me careening into a wall or faceplanting with no manual controls.

Hell, they're literally called Moonwalkers and it's not actually clear whether you can even walk backwards in them.

Oh okay, here's a quote from the user manual:

quote:

Moonwalking for an extended period can increase the internal temperature of the Moonwalkers above 122°F/50°C. When this happens, your speed will be reduced, and after some time, the shoes will stop with the status indicator lights flashing their respective code (see section 10 understanding your Moonwalkers status indicators). Allow your Moonwalkers to cool by bringing them inside, and you will be ready to walk again.

Precambrian Video Games fucked around with this message at 00:31 on Mar 28, 2023

Precambrian Video Games
Aug 19, 2002



https://twitter.com/CBCNews/status/1641040123721658369

Someone is supposedly running scams against older parents using not-entirely-convincing deepfakes of their adult children's voices.

quote:

When Donna Letto got a phone call one day in December, she says, she recognized the voice on the other end of the phone.

Or at least, she believed she did.

"I thought it was my son," the St. John's woman told CBC News in a recent interview.

Her son's voice sounded a little off, though, and she asked him about it.

"I said, 'You've got a bit of a cold, don't you?' He said, 'Yeah, I've got a cold the last couple of days but there's nothing to it. It's not COVID.' Then he said, 'I've been in an accident in Toronto and I hit a woman that's pregnant and she's been sent to the hospital with multiple injuries,'" said Letto.

"He said, 'Mom and Dad, I need your help.' We were really taken. I thought for sure it was our son. The kids never asked us for help before."

I'm not really sure what convinced them that this is actually a deepfake rather than a close-enough voice filter, besides some AI expert saying that it's possible and also the victims' own embarrassment at nearly falling for it.

Precambrian Video Games
Aug 19, 2002



https://twitter.com/kashhill/status/1641827342187102214

It's exactly what you think:

quote:

according to Clare Garvie, an expert on the police use of facial recognition, there are four other publicly known cases of wrongful arrests that appear to have involved little investigation beyond a face match, all involving Black men.

...except somehow worse than that:

quote:

On the Friday afternoon after Thanksgiving, Randal Quran Reid was driving his white Jeep to his mother’s home outside Atlanta when he was pulled over on a busy highway. A police officer approached his vehicle and asked for his driver’s license. Mr. Reid had left it at home, but he volunteered his name. After asking Mr. Reid if he had any weapons, the officer told him to step out of the Jeep and handcuffed him with the help of two other officers who had arrived.

“What did I do?” Mr. Reid asked. The officer said he had two theft warrants out of Baton Rouge and Jefferson Parish, a district on the outskirts of New Orleans. Mr. Reid was confused; he said he had never been to Louisiana.

Mr. Reid, a transportation analyst, was booked at the DeKalb County jail, to await extradition from Georgia to Louisiana. It took days to find out exactly what he was accused of: using stolen credit cards to buy designer purses.

quote:

The Sheriff’s Office has a contract with one facial recognition vendor: Clearview AI, which it pays $25,000 a year. According to documents obtained by The Times in a public records request, the department first signed a contract with Clearview in 2019.

Clearview scraped billions of photos from the public web, including social media sites, to create a face-based search engine now used by law enforcement agencies. Mr. Reid has many public photos on the web linked to his name, including on LinkedIn and Facebook. The public information office for the Jefferson Parish Sheriff’s Office did not respond to requests for comment about the use of Clearview AI.

quote:

To get a warrant to arrest someone, an officer must convince a judge there is probable cause — meaning, essentially, there is a good reason to do so — and get the judge’s signature. In the past, that meant an officer had to go to court, or even meet a judge at a diner in the middle of the night if the case was urgent. That is a moment when questions are asked about the strength of the evidence, legal experts say.

But the friction of getting a warrant has been eased by technology. The Jefferson Parish Sheriff’s Office uses an “eWarrant” service, CloudGavel, for which it paid $39,800 last year. It’s an app that allows officers to request digital signatures from judges. “Law enforcement officers can now get an arrest warrant approved in minutes,” the company’s website states.

Many civil liberties advocates actually favor electronic warrants; they allow judges to more easily review decisions made by the police and eliminate a complaint from officers that it’s too hard to get a warrant. But advocates said it would be worrisome if judges were simply clicking a button without asking questions or providing sufficient scrutiny.

CloudGavel. You can't make this poo poo up.

quote:

Why exactly Mr. Reid and his white Jeep attracted the DeKalb County police’s attention that day is unclear. The arresting officer wrote in an incident report that he had learned about Mr. Reid’s warrants from a “random GCIC/NCIC query of the vehicle tag,” referring to the National Crime Information Center, an F.B.I. repository of wanted persons and vehicles, and the Georgia Crime Information Center. It’s possible the officer saw Mr. Reid driving by and, for some reason, decided to run his license plate.

But Molly Kleinman, the director of a technology policy research center at the University of Michigan, said many kinds of surveillance technologies on the highway could have alerted the officer to Mr. Reid’s presence on the “hot list,” including toll pass readers and automated license plate readers, which Atlanta has in the hundreds on roads and police vehicles. (A spokesman for the DeKalb County police said a license plate reader was not used.)

I'm seriously confused at this point: the police would rather say "yes, we ran this guy's plates for no reason other than that he was a darker-skinned man driving a white Jeep" than admit that cameras automatically read license plates??

quote:

His lawyer, Mr. Calogero, gathered photos and videos of Mr. Reid from his family, hoping to more clearly show the Louisiana police what Mr. Reid looks like, and sent them to the Jefferson Parish Sheriff’s Office on Wednesday, Nov. 30, five days after the arrest. An hour later, Mr. Calogero said, an officer called to inform him that the police were withdrawing the warrant because they had noticed a mole on Mr. Reid’s face that the alleged purse thief did not have.

Important to note that the mole apparently exonerated him and not, for example, being 500-odd miles away from the scene of the crime.

Precambrian Video Games
Aug 19, 2002



woke kaczynski posted:

A whole page and nobody mentions that one black mirror episode? hosed up

One?

Precambrian Video Games
Aug 19, 2002



Papercut posted:

Even when I get a voicemail, I just read the speech to text, I don't actually listen to it

... except the only relevant voicemails I get have numbers, names, dates and/or times in them, all of which speech-to-text is liable to butcher.

Precambrian Video Games
Aug 19, 2002



I am utterly shocked that the US military is a grift that just makes poo poo up constantly with minimal oversight. That's before considering that the story was about "AI".

Precambrian Video Games
Aug 19, 2002



Civilized Fishbot posted:

Was there some situation like this with TVs (or color TVs) where buying them wasn't appealing because there wasn't a lot of content for them and making content for them wasn't appealing because there weren't a lot of people buying them?

Is 3D TV not technically dead yet?

Precambrian Video Games
Aug 19, 2002



LASER BEAM DREAM posted:

I just sat through a town hall at my very large company where they demoed Github Copilot and Office Copilot. People literally clapped when they saw this Outlook demo summarize a thread and generate a reply based on data from a separate Word document.

They clapped because it shows that upper-middle managers are practically illiterate and can be replaced by a sophisticated magic eight ball, right? Or because they're looking forward to the flood of emails read and written exclusively by software, absolving them of responsibility for any decisions they make?

Precambrian Video Games
Aug 19, 2002



Non-Functional Transport?

Precambrian Video Games
Aug 19, 2002



Juicero was at least a good name, they just needed to lean a little harder into the ero aspect of it.

Precambrian Video Games
Aug 19, 2002



Family Values posted:

The actual answer IMO is to stop structuring our economy so that anyone that wants a professional career has to pile into a handful of cities, and also stop expecting people to fill up cubicle farms for work that could be done from home.

We tried that and it ruined commercial real estate speculation so it will never happen again.

Precambrian Video Games
Aug 19, 2002



The Metaverse as a home for wildly impractical architecture seems to make sense. Conceptualizers can erect all of the flowing curved glass monoliths which would be impossible to build in reality and not have to worry about focused reflections lighting passersby on fire or falling misshapen panes shattering on their heads. Now I'm not sure how to monetize this since it has already been done in Second Life probably decades ago but I leave that as an exercise to the reader.

Precambrian Video Games
Aug 19, 2002



Rent-A-Cop posted:

Nope, and it will never be a thing. A limited amount of sunlight hits the Earth, and the amount that hits a car-sized bit of Earth, even on a perfect day, isn't enough to charge a car.

Slap some solar panels on this thing and I'm sure it could take you, uh, somewhere much slower and less comfortably than an ebike:
