I have never been alive at a time when oil companies weren't actively colluding to end the world and they knew it. They've known since the 70s what they were doing to the world. They continue to lie about it. They continue to do the things they know will kill us all.
Every wild-swinging hot take on the "AI" art and GPT situation goes fucking everywhere, while the few nuanced takes I've seen struggle to make it around at all.
This shit's neither harmless nor inevitable, AND it doesn't have to be made in the harmful ways corporations will tend toward making it. Algorithmic applications are primarily created from within, and in service of, hegemonic, carceral, capitalist profit motives, meaning they act as force multipliers/accelerants on the worst depredations of those motives. That goes for art and language as much as it goes for housing and policing.
Neither tech nor play is a neutral category, and these tools COULD be used to help a lot of people. But until we get the capitalist shit off of them and put meaningful regulation (that is, regs specifically designed to safeguard the most marginalized and vulnerable) in place around these frameworks, they're going to keep doing a lot of harm, too.
LLMs are trained on data that is both unethically sourced AND prejudicially biased in its content, and they operate by means of structures that require vast amounts of natural resources. But they COULD and SHOULD be made differently.
"AI" art tools can &do help people who either never could or can't any longer do art in more traditional modes create things they feel meaningfully close to. But they're also being trained on pieces by living artists scraped without their knowledge, let alone their consent or credit
I'll say it again (I said it before here: https://sinews.siam.org/Details-Page/the-ethics-of-artificial-intelligence-generated-art): The public domain exists, and it would have been a truly trivial matter for the people who created "AI" art tools to train them, and update them, on public domain works; the public domain grows literally every year. But that's not what we have here.
GPT checkers are apparently already being deployed to try to catch GPT cheaters, but, again (https://twitter.com/Wolven/status/1599987850405371904): Why be in an adversarial stance with your students when you could use the thing to actually, y'know, teach them how to be critical of the thing? Additionally, the GPT chat checkers seem to have problems with original text written by neurodivergent individuals (https://kolektiva.social/@FractalEcho/109480097279253524) and with other text in general (https://www.aiweirdness.com/writing-like-a-robot/). So, like many automated plagiarism checkers and online proctoring software, their deployment directly endangers disabled and otherwise marginalized students in our classes.
Uncritical use of either GPT or "AI" art tools, or of their currently proposed remedies, does real harm, because the tools are built out of and play into extant harmful structures of exploitation and marginalization. But these things can be engaged and built drastically differently.
But in order to get the regulations and strictures in place to ensure that they ARE built differently, we have to be fully honest and nuanced about what these "AI" systems are and what they do, and then we have to push very hard AGAINST the horrible shit they're built in and of.
I need some time to work out a better way to express it, but lately I've been trying to put the recent "AI" developments into the context of Star Trek. In many TNG episodes they ask the computer to do things that sound simple but were, at the time, extremely hard to ask computers to do.
The example that comes most to mind right now is when they were using the holodeck to investigate a murder. The investigator asked the computer to extrapolate all sorts of things about the scene. Other times they ask the computer to combine some math and make guesses about it.
There's a universe in which that's what these "AI" systems are. For the extrapolation it's a mix of image generation and code generation. For the math one it's mostly code generation. We're right now starting to see things good enough to take those prompts and do that. I just don't want us to lose sight of the forest for the trees on that front.
As others have pointed out in different ways, there are good outcomes and good usages for these systems. It's just that, although these systems would be amazing in Fully Automated Luxury Gay Space Communism, right now they're being developed as part of a system that doesn't care that it's wiping out human existence.
The last time I really thought about it, high school physics hadn't caught up to 1915 physics yet. For kids who don't do advanced placement, high school math hasn't caught up to the end of the 19th century. Math for most students basically stops around the same time physics does, though there are at least options.
"Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?”"
"Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?"
The example given of the tedium of long division reminded me of something that keeps coming up as I watch my kids go through school: we teach math with a strict adherence to certain skills that are likely useless. It was stated to me originally in the more thought-provoking form, "We should teach calculus in elementary school."
If writing and literature education starts lagging the real world the way math and science education does, it's not going to be great.
Didn't these guys just have to use the government to enslave their workers? I feel like "cannot retain workers without forced government arbitration" should be a negative. Instead it's clear investors believe the government will always choose them over labor.
In this video they describe part of why we quit playing, though in a different context. They talk about how, once people had bought a game in Rock Band, they weren't going to buy it again in Guitar Hero. This was even more true than what they said.
New versions of Rock Band wouldn't bring over all of your songs. You couldn't convert your songs to new consoles. For a convention we once set up a Rock Band machine from scratch, and it was almost $1,000 just for the music. It wasn't just that we weren't willing to hop franchises; we weren't willing to pay for the same thing again within the same franchise.
When automation first showed up we said, "Robots are replacing people." This fallacy is at the heart of the conflict between where technology has been going and where capitalism has been going. Robots replaced jobs, not people. That we've equated the two is one of the greatest crimes perpetrated by capitalism.
That these things are in conflict has been obvious since at least the 19th century. Thinking critically about the conflicts between working and non-working classes of people it is clear that this has been obvious for substantially longer.
This is about AI art, but also about every technology that has improved productivity. They all have the same thing in common: the wealthy extracted the increased value and convinced the workers that they didn't deserve the fruits of the productivity gain. There are good, moral reasons to dislike AI art, but in the context of art as a job, always remember that the real enemy is the owners, and AI art is just a tool they're wielding to hurt you.
The people who are the least important to the railways are the ones making the most money. I know it won't happen, but I hope every rail worker just doesn't show up tomorrow. Maybe not for the rest of the week. Maybe not until they are paid a fair portion. Hell, maybe not until they're paid a little more than what they're really worth.
Fuck every one of these owners and fuck Congress treating these workers as if they're slaves to this job.
This is a fun overview of leveraged buy-outs and private equity: https://www.youtube.com/watch?v=z5PLEZiSZVw
He goes over some details, but this is one of those things that's obviously a scam on its face. If any of the numbers were real, why would anybody finance that debt? Nobody would because it's such a clear loss in almost every direction. The only way it makes sense is if people are being paid off.
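The skewed incentives can be boiled down to a toy worked example (every number here is hypothetical, chosen only to illustrate the shape of the deal, not any real transaction):

```python
# Toy leveraged buyout, with entirely made-up numbers.
company_price = 100_000_000
equity_from_buyer = 10_000_000             # the PE firm risks only 10%
debt = company_price - equity_from_buyer   # the other 90% is borrowed...
# ...and that debt lands on the acquired company's books, not the buyer's.

management_fees = 2_000_000 * 5    # fees the buyer charges the company over 5 years
special_dividend = 15_000_000      # a dividend to the buyer, funded by more borrowing

buyer_take = management_fees + special_dividend
# The buyer has already recouped more than its stake, so it profits
# even if the debt-loaded company later collapses; the lenders eat the loss.
print(buyer_take > equity_from_buyer)
```

That asymmetry (upside for the buyer, downside for the lenders and the company) is exactly why "who would finance this?" is the right question to ask.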
I'm also unhappy to discover that Windows 11 has added a new layer of fuckery that makes free software look bad. I tried to install Digikam and the exe wouldn't launch correctly. Every time I double-clicked it, a dialog would pop up saying it needed to search the Store for something. If you said "yes," the Store would simply fail to find anything to handle the file extension.
The whole process made it look and feel like Digikam was malware. The only way I got around it was to launch a shell and run the exe directly.
I try to remember these things whenever people tell me the problem with Linux is that there's not a smooth user experience. I haven't found an OS yet where I don't start by having to customize everything and launch into a terminal.
It's annoying how certain apps build really cool features that are so frustrating to use. Today I broke a rule I'd held for a long time: don't click on any of the automatically generated things in Google Photos. I broke it because I really liked what it put together for me!
The problems are:
I want to be clear that I'm picking on Google today, but most interfaces across most operating systems are utter garbage these days. I remember for years and years that people complained about free software stuff because "the UI isn't good." Clearly that's not actually what people are evaluating their tools on.
I thought Dave Chappelle's SNL monologue was funny. Having said that, it was not a personal conversation between him and me. He was selling me an entertainment product. If I hadn't liked it and had chosen not to consume further entertainment products from him, there would be no grand moral implication for his freedom of speech.
Thinking about the collapse of Twitter, I'm reminded that the old technology all of these things are trying to supersede is still around: email.
In the end, all of these things could be implemented as a specialized UI on top of email. There are a lot of good reasons not to do it that way, but if I wanted to make an entirely distributed social media system today, I could implement it as a fancy interface to email.
There are a bunch of competitors in the social media space, but one people haven't really been talking about is the humble mailing list. The UI kinda sucks, but it has survived many transformations of technology. This is mostly because email was built on a solid, federated foundation, and many of the competitors start with a variant of, "What if we built a version of email where the users have no control?"
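As a minimal sketch of that claim (the addresses and list name below are hypothetical), a "post" and its threaded "reply" are just standard messages built with Python's stdlib email module; the Message-ID/In-Reply-To headers are the same threading primitive every mail client has used for decades:

```python
from email.message import EmailMessage
from email.utils import make_msgid

# A "post" is just an email to a mailing list that acts as the timeline.
post = EmailMessage()
post["From"] = "alice@example.org"
post["To"] = "town-square@lists.example.org"
post["Subject"] = "Hello from SMTP-land"
post["Message-ID"] = make_msgid(domain="example.org")
post.set_content("First post!")

# A "reply" threads itself with standard headers; a fancy UI would just
# render this chain as a conversation.
reply = EmailMessage()
reply["From"] = "bob@example.org"
reply["To"] = "town-square@lists.example.org"
reply["Subject"] = "Re: " + post["Subject"]
reply["Message-ID"] = make_msgid(domain="example.org")
reply["In-Reply-To"] = post["Message-ID"]
reply["References"] = post["Message-ID"]
reply.set_content("Welcome!")
```

No server here actually sends anything; the point is that the data model for a federated, user-controlled feed already exists in the message format itself.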
None of these represent complete thoughts. It's just been on my mind a lot lately from a technical perspective.
Here's a hot take of mine that comes to mind every time I hear about Kanye and Adidas. He could've said a lot of different things and ended with, "and Adidas can't drop me," which would've resulted in Adidas dropping him. It could literally have been gobbledygook, and there might still have been an executive who went, "Fuck you. We can't drop you? Who the fuck do you think you are?"
I don't know if that's how it happened, but I get annoyed on Adidas' behalf when I hear that challenge repeated.
We come here in search of a place to express our thoughts outside of the direct control and surveillance of unaccountable, mega-corporations. There is no common theme that binds us other than these being the bonds we've chosen rather than those that have been chosen for us.