|
something needs to provide executive function on top of all the pattern matchers people have been building. symbolic AI is coming back for sure. 2019 year of cyc
|
# ¿ May 27, 2019 07:26 |
|
ah but you’re forgetting that the ML version will have unpredictable results handling unexpected input!
|
# ¿ May 29, 2019 17:10 |
|
neural networks (ML) aren’t entirely horseshit, it’s just that what they actually let you do is far less than what a bunch of idiots eager for your cash say they let you do. animals do use neural nets for “everything,” sure, but real neurons are a bit more than simple integrators, so we’re not really very close to actually using neural nets the way animals do and getting the benefits of a faster substrate. instead we’re able to do basically one simple task in a way we don’t really understand, and we push even that far beyond reasonable use. if I were asked to make an “object recognizer,” I wouldn’t train one huge network on a million images of the object I want to recognize and allow steganography to break everything; I’d train a large number of smaller networks on different characteristics to recognize, use additional separate networks to determine confidence from those recognitions, and so on, finally arriving at the one confidence value
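a minimal sketch of what that ensemble idea could look like, with everything invented for illustration: the per-characteristic recognizers are stubs standing in for small trained networks, and the combiner that should itself be a learned model is just a weighted average here.

```python
# Hypothetical sketch of the post's ensemble idea: several small
# recognizers each score one characteristic, and a separate combiner
# turns those scores into a single confidence. All names and numbers
# are invented; the stubs stand in for small trained networks.
from typing import Callable, Dict

# Each "recognizer" scores one characteristic of the input in [0, 1].
Recognizer = Callable[[bytes], float]

def make_stub(score: float) -> Recognizer:
    """Stand-in for a small trained network with a fixed output."""
    return lambda image: score

recognizers: Dict[str, Recognizer] = {
    "silhouette": make_stub(0.9),
    "texture": make_stub(0.7),
    "color_histogram": make_stub(0.8),
}

# In the post's framing this combiner would be yet another separate
# network; here its "learned parameters" are just fixed weights.
weights = {"silhouette": 0.5, "texture": 0.2, "color_histogram": 0.3}

def combined_confidence(image: bytes) -> float:
    scores = {name: r(image) for name, r in recognizers.items()}
    return sum(weights[n] * s for n, s in scores.items())

print(round(combined_confidence(b"fake-image"), 3))  # 0.83
```

the point of the structure being that no single huge network sees the raw image end to end, so one adversarial cue is less likely to break everything at once.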
|
# ¿ Jun 1, 2019 07:15 |
|
I know this one: “Thou shalt not make a machine in the likeness of a human mind.”
|
# ¿ Jun 5, 2019 07:05 |
|
ontology redistributes phylactery
|
# ¿ Nov 8, 2019 02:20 |
|
I kept reading Herzberg as Herzog. clearly non-machine learning also has a ways to go
|
# ¿ Nov 8, 2019 06:36 |
|
too high a risk of Verizon Math
|
# ¿ Nov 13, 2019 08:32 |
|
animist posted: "the type-safety of python with the compile times of c++"
|
# ¿ Nov 28, 2019 01:10 |
|
just use MACSYMA for your mathematical modeling and Symbolics Plexi for your neural network modeling (to run on an Array Processor boardset in your 3670 or a Connection Machine, of course)
|
# ¿ Nov 28, 2019 01:50 |
|
lancemantis posted: "statisticians amirite?"

depends on your confidence interval
|
# ¿ Feb 1, 2020 07:21 |
|
Bloody posted: "i love advanced undergraduate analysis of algorithms"

at some schools it’s like a weeder course for upperclassmen
|
# ¿ Mar 27, 2020 04:51 |
|
Schadenboner posted: "thispopactdoesnotexist.com"

if you could generate the look and the lyrics and the music and put it all together, this could be awesome
|
# ¿ Jun 13, 2020 18:54 |
|
“this white person probably exists and is awful”
|
# ¿ Jun 22, 2020 02:46 |
|
it wouldn’t surprise me if it actually did full facial recognition and checked whether he had the necessary paperwork filed to appear on her stream. it’s not like China isn’t actively maintaining a detailed profile on everyone, online and offline, and integrating it all with their bureaucracy and their technical infrastructure. you know, like Facebook
|
# ¿ Jul 9, 2020 13:47 |
|
Carthag Tuek posted: "youre the ones putting a gender identity on my email in the first place!!!"

X.400 user spotted
|
# ¿ Jul 29, 2020 09:30 |
|
what we really need is a machine learning model that can be used to distinguish human posts from machine-generated posts
|
# ¿ Aug 6, 2020 01:06 |
|
BrokenGameboy posted: "Since the thread seems to mostly be talking about industry, what's ml been like in academia? Genuinely interested."

academia doesn’t do machine learning, that’s an industry thing. academia does artificial intelligence
|
# ¿ Aug 8, 2020 09:36 |
|
true “self-driving cars” require solving artificial general intelligence; they’re not something you can develop algorithmically. and machine learning isn’t a scam, it just isn’t what it’s sold to lay people as: it’s black-box pattern recognition implemented via neural networks, not “intelligence” of any sort. like, you can train a neural net to play Super Mario Bros., but it won’t be able to play a level it’s never seen before without a whole ton of extra work. that could mean increasing the size of the network by several orders of magnitude and doing tons of randomized training with both positive and negative feedback so it’s less likely to just randomly flake out in a new situation, or it could mean making the recognition models just one aspect of a system under the control of some kind of “executive” that can make higher-level goal-oriented decisions while integrating new environmental information. either way, the important part is that even most recognition models are temperamental and we don’t always get why they work, what cues they’re going off, and so on, and the state of the art in implementing “executive function” requires an enormous amount of compute power and isn’t very usable for arbitrary tasks yet either
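a toy sketch of the recognizer-vs-executive split the post describes, with everything invented for illustration: the recognizer stands in for black-box pattern recognition that only reports labels, and the executive is the separate goal-oriented layer choosing actions. real "executive function" is exactly the unsolved part, so here it's just explicit rules.

```python
# Invented illustration of the split described above: recognition
# models report what they see; a separate "executive" layer decides
# what to do about it. All names, thresholds, and actions are made up.
from typing import Dict

def recognize(frame: Dict[str, int]) -> Dict[str, bool]:
    """Stand-in for black-box recognition models: pattern in, labels out."""
    return {
        "gap_ahead": frame.get("gap_distance", 99) < 3,
        "enemy_ahead": frame.get("enemy_distance", 99) < 2,
    }

def executive(percepts: Dict[str, bool]) -> str:
    """Goal-oriented layer. Hard-coded rules here; in the post's framing
    this is the part that would need actual executive function."""
    if percepts["gap_ahead"] or percepts["enemy_ahead"]:
        return "jump"
    return "run_right"

frame = {"gap_distance": 2, "enemy_distance": 10}
print(executive(recognize(frame)))  # jump
```

the design point being that the recognizers never pick actions and the executive never touches raw input, so new situations are handled (or mishandled) at the decision layer rather than by hoping the pattern matcher generalizes.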
|
# ¿ Mar 31, 2021 05:54 |