Oh that was FAST fast.

GPT-2 output detectors work on GPT-3 and ChatGPT output, which means people who do things like Turnitin plagiarism checks have a tool for this fast-moving new frontier.
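(For the curious, here's a minimal sketch of poking one of those detectors via the Hugging Face transformers library. The checkpoint name and its label set are my assumptions; check the model card before relying on any of it.)

```python
# A minimal sketch: scoring a passage with the RoBERTa-based GPT-2 output
# detector through Hugging Face transformers. The checkpoint name and the
# meaning of its labels are assumptions; verify against the model card.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",  # OpenAI's GPT-2 output detector
)

passage = "In conclusion, the industrial revolution had many causes and effects."
result = detector(passage)[0]

# A high machine-generated score is a weak signal, not proof: these
# detectors misfire on plenty of genuinely human writing.
print(f"{result['label']}: {result['score']:.3f}")
```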

Personally, I use a choose-your-own-adventure/create-your-own-assignment model for most of my non-intro classes at this point, and frankly I'd be inclined to make this into an assignment in its own right. It could look something like:

"Generate a GPTChat output on [topic(s)], then expand on and correct the output with specific references and citations from class readings and lectures. Turn in the prompt, the original, and your corrections as your full submission."

Reframed like that, it helps them think about what the GPT platforms are and what they do, and you can integrate that into the whole of the semester's work rather than making it an arms race of plagiarism and "gotcha" tools.

wandering.shop/@janellecshane/

Every wildly swinging hot take on the "AI" art and GPT situation goes fucking everywhere, while the few nuanced takes I've seen struggle to make it around.

This shit's neither harmless nor inevitable, AND it doesn't have to be made in the harmful ways corporations tend to make it. Algorithmic applications are primarily created from within, and in service of, hegemonic, carceral, capitalist profit motives, which means they act as force multipliers/accelerants on the worst depredations of those motives. That goes for art and language as much as it goes for housing and policing.

Neither tech nor play are neutral categories, and these tools COULD be used to help a lot of people. But until we get the capitalist shit off of them and place meaningful regulation (that is, regs specifically designed to safeguard the most marginalized and vulnerable) around their frameworks, they're going to keep doing a lot of harm, too.

LLMs are trained on data that is both unethically sourced AND prejudicially biased in its content, and they operate by means of structures that require vast amounts of natural resources. But they COULD and SHOULD be made differently.

"AI" art tools can &do help people who either never could or can't any longer do art in more traditional modes create things they feel meaningfully close to. But they're also being trained on pieces by living artists scraped without their knowledge, let alone their consent or credit

I'll say it again (said it before here: sinews.siam.org/Details-Page/t): The public domain exists, and it would have been a truly trivial matter for the people who created "AI" art tools to train and update them on public domain works, which expand literally every year. But that's not what we have here.

GPT checkers are apparently already being deployed to try to catch GPT cheaters, but, again (twitter.com/Wolven/status/1599): Why be in an adversarial stance with your students when you could use the thing to actually, y'know, teach them how to be critical of the thing? Additionally, the ChatGPT checkers seem to have problems with original text written by neurodivergent individuals (kolektiva.social/@FractalEcho/) and with other text in general (aiweirdness.com/writing-like-a), so like many automated plagiarism checkers and online proctoring tools, their deployment directly endangers disabled and otherwise marginalized students in our classes.

Uncritical use of either GPT or "AI" art tools, or of their currently proposed remedies, does real harm, because the tools are built out of and play into extant harmful structures of exploitation and marginalization. But these things can be engaged with and built drastically differently.

But in order to get the regulations and strictures in place to ensure that they ARE built differently, we have to be fully honest and nuanced about what these "AI" systems are and what they do, and then we have to push very hard AGAINST the horrible shit they're built in and of.


The ChatGPT assignment I proposed at the top of this thread looks like this, this semester:

So it seems like a lot of people don't know about the "Choose Your Own Adventure" assignment model. I use two different variations, and they're both pretty straightforward, actually:

a) You create a grading model with a set number of points, or a full 100-percent calculation, then you create a range of potential assignments which can be chosen from and combined to reach those points/that percentage (there's a rough sketch of this in code below).

Variation (b) is the "study guide" model, wherein you instruct the students to complete a creative project which would help them study AND help them if they needed to communicate the material to other people; then you leave the framework COMPLETELY open to them and let them give you what they've got.

You can combine these by folding a "Create Your Own" option into variation (a).
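For anyone who wants variation (a) in concrete terms, here's a rough sketch of the point-menu idea; every assignment name and point value in it is a hypothetical illustration, not my actual syllabus:

```python
# A minimal sketch of variation (a): a menu of optional assignments that
# students combine to reach a required point total. All names and point
# values here are hypothetical illustrations, not an actual syllabus.

REQUIRED_POINTS = 100

# The menu: each option is worth a set number of points.
MENU = {
    "chatgpt_critique": 25,      # generate, then correct and cite
    "reading_response": 10,
    "midterm_essay": 30,
    "study_guide_project": 35,   # the "create your own" / variation (b) option
    "final_presentation": 40,
}

def validate_plan(chosen: list[str]) -> int:
    """Check a student's chosen combination against the menu and point target."""
    unknown = [name for name in chosen if name not in MENU]
    if unknown:
        raise ValueError(f"Not on the menu: {unknown}")
    total = sum(MENU[name] for name in chosen)
    if total < REQUIRED_POINTS:
        raise ValueError(f"Only {total}/{REQUIRED_POINTS} points planned")
    return total

# Example: one student's self-assembled semester.
print(validate_plan(["chatgpt_critique", "study_guide_project", "final_presentation"]))
```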

I've gotten D&D campaigns, tarot decks, books of poetry, all kinds of stuff. A lot of people fall back on pre-made game models you can find online, but last semester I even had students write webpages and code up apps and executable scripts (yes i read the code, before attempting to run it).

And if you write the prompts for variation (a) correctly, then even if the students don't choose those options, they'll learn something from them.


Honestly, even if you just give them a view into the sheer range of possibilities for variation (b), the students can learn something from the prompts and the assignments, no matter what.


(I learned a variant of this model from Ashley Shew, roughly five and a half years ago now, and I've loved it ever since.)


Due to gross ethical mismanagement by OpenAI, I'm removing this experimentation option from the syllabus, and I'll be replacing it with something else. Details TK.
ourislandgeorgia.net/@Wolven/1


This gross ethical mismanagement by OpenAI right here: ourislandgeorgia.net/@Wolven/1

Add this in with Microsoft announcing their "increased investment in partnership" with OpenAI literally days after MSFT fired roughly 11,000 people, a move MSFT specifically said it was making to cut costs and free up revenue to funnel into "AI" research; that is just exactly what this *is*.

MSFT fired a BUNCH of their in-house "AI" people (among thousands of others) because a "partnership" where they farm out applications and capabilities research to OpenAI was cheaper and less time- and resource-intensive.

Put all of those moves together and you have the recipe for some really bad shit about to go down, and I don't want my students contributing to the refinement data and models of companies that do and condone shit like that.

Also? I don't want to hear any more about anyone's Skynet/Black Mirror/I, Robot fantasy terrors, because those ONCE AGAIN miss the much more crucial angle: a MASSIVE MULTIBILLION-$ CORP which literally JUST demonstrated how little it cares about humans is buddying up with a grossly exploitative "AI" firm.


Honestly, I'm having a LOT of trouble coming up with an alternative to the ChatGPT assignment I created before the Kenyan worker exploitation story broke.

Like, I really don't want to have my students working with OpenAI, pretty much at all, but I also want to help them learn how to think through and with this stuff. It's a real conundrum.


Like… I was able to countenance the compute costs of having, at most, ~40 students run ChatGPT for one lesson module. I mean, I wasn't super jazzed about it, but I could swallow it for the sake of running the experiment and crafting the lesson…

But the fact that having the students do so would be directly contributing to the harmful and exploitative content moderation practices of OpenAI itself, RIGHT as it's partnering up with Microsoft to expand the distribution and reach of their tools and paradigms of engagement?

Nope, can't do it.


So as mentioned above, I removed my original ChatGPT experimentation option from my syllabi due to OpenAI's shit, and I'd been having some trouble coming up with something to replace it.

Well, I finally got somewhere with a little brainstorming help, and the updated assignments look like this.

Again: These are not REQUIRED assignments, they are OPTIONS in a Choose Your Own Adventure assignment model (also see above).


@Wolven Really interested to hear about how that works out.

@Wolven I'm reminded of this from @egoldman, ages and ages ago: personal.ericgoldman.org/offer. I wonder if there are any interesting comparisons.

@Wolven
Oh, damn!!!!
That is an absolutely EXCELLENT assignment.
Almost makes me want to do it. Unfortunately, I don't have any info from your class lectures.
I'm retired now, but I will definitely steal this idea & pass it along to former colleagues who are still in the classroom.

@Wolven Thank you for this. I wish I'd thought of it but I'm glad you did. Please share with us any follow-ups.

@Wolven very clever. I hope it creates good experiences for those students to get into the nuances very directly.

@Wolven Terrific stuff! I suspect that I will soon have reason to reference this (Friday, actually).

@Wolven This is what I did when evaluating it as a new tool to augment the creative process. As an engineer, I wanted to discover the limits of the system and understand how to use it effectively.

@Wolven I wish I had more professors like you when I was in college. This assignment is a great way to force critical thinking about what you are trying to teach and help students broaden their perspectives on the topic.

@Wolven The first smart, thoughtful take on ways to work with the new tech instead of throwing up hands and screaming.

@Wolven

Clever; the only problem is that you're forcing your students to give and verify their phone numbers, unless you can provide them with access some other way.

@FuckElon It's not a mandatory assignment; it is one option among many in a choose-your-own-adventure model.

@Wolven I really like that this teaches editing, revising, and checking details and sources

@Wolven I'm asking mine to ChatGPT-then-rewrite an infosec incident report (for a real-world incident).

I expect to see many hilariously horrifying factual errors.

@Wolven This is great. I had thought about doing something similar, but after discussions in our dept I was dissuaded because not all of our students might have access to ChatGPT (e.g. because of the waiting list or because of capacity issues). How do you deal with this? Thanks.

@tnhh For my part, I make it an optional assignment, as a choice among many, and I word the question such that even if they can't do the assignment itself, they're thinking carefully about the things involved in it.

@Wolven That sounds great. Unfortunately, we aren't allowed to offer a choice of assignments at our institution; I have asked in the past but was denied.

@Wolven don't fear the machines. Fear the people who own the machines.

@Nothingsmonstrd Yes, and fear the incentive structure that owns the people who own the machines. @Wolven

@Wolven So, like, industrial capitalism again. Oh it could all be so different this time. But no. Exploitation, extraction, profit.

@Wolven BLOOM entailment? You might want to reach out to the Hugging Face people; it will be less of a question-answering task and more of an entailment task. It's really good for understanding how such models work, and the BLOOM team gave good thought to ethics beforehand.

@Wolven The first year I did debate, the topic was whether to teach media literacy in schools. I didn't realize how many people didn't know how to watch TV critically (I'm not sure where I learned it, but I had the skill by high school). It comes to mind for your situation because, if they are going to use this system anyway, where are they going to learn the ropes? Certainly the systems themselves don't encourage people to look at them with a critical eye.

I tried out one of them as essentially a search engine for one of my role-playing games. A friend suggested it because he thought it was cool that it even knew the details of this game. I wasn't shocked by its knowledge of the game. I was shocked that the longest response I got was paragraphs of BS covering for it not knowing the real answer. I didn't expect it to work so hard to lie to me.

If you're not gonna use it, I sorta want to pose the ideas from the question as an exercise for people I know and work with who are starting to.

@Wolven
I fortunately don’t have to wrestle with this question this semester, but I will be wrestling with it. One thing on my mind: is it more harmful to participate now, or to send out students who are going to participate uncritically in the future?

@Wolven It's probably naive to believe anything a large corporation says publicly. The layoffs in particular are not likely to be tied to a pre-existing investment in OpenAI; rather, IMHO, they're more likely part of a coordinated layoff campaign designed to keep salaries low. So suppressing worker activity, I think, is more credible as a reason than "investing in AI"…

@tonic Layoffs were happening around the sector as a means to safeguard profits long before MSFT announced their OpenAI partnership, but the momentum and automation potential *of* OpenAI's tools made those layoffs easier to sell and justify.

The "pivot to ai" was always on msft's docket (you can look at their research funding and publications over the past 5 years to see that), but openai's rising star and the broader tech sector layoffs gave them good cover to fire teams they likely wanted to fire anyway.

"Naïve." Mm. Have a good one.

@Wolven

5% more of this system-wide would leave us 100% better off. Something like that, with error bars.

@Wolven Both variations sound intriguing, and I love your takes on engaging with AI models. Would you be able to share an example of each variation, just to help a non-academic like me see it in context?
