Is Artificial Intelligence Taking Over?
AI has certainly made waves recently, but is it here to replace us or make our lives easier? A Drake University professor weighs in.
Whether AI is "taking over" is a complex question with no simple answer. It depends on how you define "taking over" and what aspects of life you're considering. Here are some different perspectives…
OK, full disclosure: I didn’t write that first paragraph. AI did. More specifically, Bard, Google’s experimental chat service, did. When prompted with “Is AI taking over?” this is what it spit out. Thanks, Bard!
I could have used other chatbots — ChatGPT, Bing AI (now Copilot), etc. — but Bard tends to be pretty low-key and easy to use. It didn’t disappoint. Much of the advice it gave was pretty spot on.
“AI is increasingly automating tasks in many fields…”
“AI is still far from being able to take over the world…”
“The future of AI is uncertain.”
All of this was echoed by Chris Snider, an associate professor in the School of Journalism and Mass Communication at Drake University. Snider has been teaching since 2010 and knows a thing or two about emerging media. In addition to talking social and digital media strategies, Snider has been keeping students updated on generative AI, also known as creative AI—artificial intelligence that focuses on creating content, powered by complex algorithms and trained on massive amounts of data.
I spoke with Snider about what he’s seeing in the classroom and in businesses, and what the general consensus is about AI taking over our jobs. (Spoiler alert: it’s not as dire as you’d think.)
Q. When did you really start to hear talk about AI?
A. I’ve been teaching AI for quite some time, and I teach a Digital Media Strategies class where I kind of force my students to think long term about what technology is coming, using a couple of different sources. One of these sources has been talking about AI for quite a while, and it all felt so futuristic for the most part. It was right around this time last year, when ChatGPT came out, that we could really get our hands on it and touch and feel and work with it in class. It started to mean a lot more to our students at that point.
Q. What's been the reaction from students?
A. To me, the consensus has been, "We're afraid of this," and maybe not afraid in the way that some people are afraid of AI. I think they've been told that it's a bad thing, so they avoid it. I tell students, "Look, we need to do some things in this class that are outside of what this class is about." My students have to build websites, and it's not a photography class. I'm like, "You can use AI to create images because this is a class about building websites, not about creating images. You can use AI to do some of the writing in this class because this is not a class about writing," and they still don't do it. Maybe two out of 20 students actually used it, even when I gave them permission to.
I gave my freshman students a survey at the beginning of the school year, and I added a question this year, "How do you feel about artificial intelligence in general?" I also ask, "How do you feel about technology in general?" Scores are super high. "How do you feel about social media in general?" Scores are super high. "How do you feel about artificial intelligence in general?" Scores are really low. They don't like it. This idea that college students like change and new things is not the truth. They are staying away from these things because maybe they feel like it's a little bit of a trap if I'm telling them to go and use them.
Q. What's Drake's policy in terms of using AI in classes?
A. Drake overall doesn't have a policy, but we did write a policy for the School of Journalism this year saying that we think these are important skills for our students to use. We're going to talk about them and teach how they can be used in individual classes on assignments. We're saying we want our students to have the skills to use generative AI the right way. It's our policy in the School of Journalism.
Q. How should students be using AI?
A. I work with Chris Porter, who's the head of Drake's artificial intelligence program, and we address a lot of this together. We saw a lot of businesses wrestling with this question, so we built a framework: First, start with how much humans should be involved in this thing you're doing. How much does it matter that a human is involved in what you're doing? If you're sending a heartfelt message about laying people off, then we probably want 100% humans to be involved in something like that. But if you're just writing a little blurb to summarize something that's going to be shared in the email newsletter, maybe we don't care so much whether humans were involved, and so more AI can be involved there. Now I can determine how much AI can be involved.
Students are different because students need to show that they can do something and, in many cases, students need to show again and again and again that they've mastered something before they turn it over to an AI. So, my approach at the college level is if one of the outcomes of the course is to show that you can do something yourself, then you have to show you can do something yourself. But if there's something that we're doing that doesn't fit the outcomes of the course, then you can use AI for that. The younger the student, the less opportunity for it to even make sense to introduce it. You've got to show you're a master of something before you can move on to automating it in the world of education.
Q. How should AI work? It's obviously growing and getting better. Could it put someone out of a job?
A. We're already seeing that it can replace elements of a job. And in many cases, it's kind of like things that maybe you weren't hiring a person to do before. What I'm excited about is the ability to take some tasks and push them fully or partially over to AI to do for you, which frees up more time for the more meaningful tasks that we do, the things that we put our true human spirit and our uniqueness into. Let's say I've got a lot of stuff on my plate. If I can get AI to do some of these tasks that aren't as important to me, I can put a lot more time into the ones that are. That's a good thing. It takes some of the mundane stuff off our plates and makes us more productive.
Q. So, there are upsides to it?
A. Yeah, just to be able to very quickly have the entire internet have your back on learning something or brainstorming something or coming up with ideas for something, I think is huge. So that's a positive. And then also just kind of freeing, like I said before, freeing up time to do those more meaningful things is important too.
Q. How do you reconcile the idea that what AI produces isn't your original work?
A. It's an ethics question, and that's where we've seen a lot of companies not willing to use these things because they don't know the answers to those questions yet. So that's kind of the unknown. What exactly was this information trained on and therefore, is it plagiarizing or not? But if you're not trying to pass something off as written by yourself, I think that's a big part of this. Don't try to pass it off as if you wrote it. Pass it off as being written by an AI.
The other thing I think about related to this is if I'm a writer, I too learned by doing lots of reading and copying other people's styles and stuff like that. To what degree was I doing that already? As a writer, you learn from other people's styles. I think that's something you've got to keep in mind, too. But it's a valid question. And that's why I'm not out writing books and then passing them off as "written by Chris Snider" and publishing them. I'm using this for when I get stuck on a paragraph I'm writing, "AI help me figure out where to go from here," and then maybe I rewrite what it comes up with. We're still at least thinking about what is important for us and what's not important in terms of where we're going to use this.
Q. Do you see products like ChatGPT and Bard just becoming another tool in the toolbox, similar to how we use Google or the internet or is there something else darker going on?
A. I do think it's going to be another tool that helps us be more productive in all the things we do, just like we can look something up in Google or we need an image for a presentation. It used to be, "Let's Google and find an image." Well, now let's just describe the image we need and let an AI create it for us. We're seeing it come into a lot of these work programs like Google Docs; even Gmail has these little buttons to help you write what you want or help you make it more formal or less formal. I think more and more people are just going to kind of get used to hitting those buttons and using this as part of their work.
Q. For someone who hasn't played with AI at all to see what it can do, what would you recommend? How do they get started?
A. I would recommend starting with ChatGPT because that's kind of the one that everyone talks about. Create an account on ChatGPT and just ask it to do something, like take an article and summarize it. Maybe ask it to summarize it so it makes sense to a 5-year-old—just get used to some of these kinds of productivity things that it can do, and then ask it to write some content for you, or to brainstorm some ideas for you, or whatever, to get a sense of what it can do. Don't feel like trying something means that you've now let the AI overlords take over. You're just trying to test it out to see what it can do. The more you get in there and see what it can do, the more you're going to realize those things it can do for you.
Q. What if I want to work with images in AI?
A. A lot of people are most familiar with kind of the text side of things, but jumping in and trying out some of these image creation tools is a really good thing to do, too. You can use Midjourney or DALL-E, but going into Bing's tool is the best way because it kind of has it all built in. It's called Copilot now. Try some of these image creation tools to see if you can find images to put into presentations, whatever you're working on.
HOLID-AI — A holiday-themed Introduction to Generative AI
If you’re interested in learning more about AI, Snider, along with professor Chris Porter, is hosting a holiday workshop—HOLID-AI—on Friday, Dec. 15, from 12-12:45 p.m. CST.
Attendees will learn about practical uses of generative AI tools such as ChatGPT, Bard, Midjourney, D-ID and more—all with a holiday twist. Visit Eventbrite to sign up.
This virtual event is free and open to the public, so put on your best ugly holiday sweater and join the fun!
The Iowa Writers’ Collaborative
I’m proud to be a member of the Iowa Writers’ Collaborative, a group of professional writers producing columns on the Substack platform of interest to an Iowa audience. For a weekly roundup of all the columns in the Collaborative, subscribe to the Iowa Writers’ Collaborative Roundup.
I was subbing in a high school ESL classroom the other day, and a particularly obnoxious kid said he was done with his book report moments after it was assigned. To prove it, he showed me a very elegantly worded report with very sophisticated ideas and interpretations that I am sure this kid was capable of thinking through, but not writing down that quickly or elegantly. Maybe if the assignment had then been to analyze his own paper, that could have been useful for him, but it wasn’t. He clearly was just using AI to cheat. Luckily I’m just a sub and could let his teacher have that conversation.
Although AI sounds good for many applications, what troubles me is the lack of conscience in the decision-making process. I'll give an instance so I can make this clear: the military talked of creating robotic medics to go after wounded on the field and retrieve them. If you are the military, how big a jump is this to making unending war a possibility? Simply put, if you can build a medic that can risk nothing and do the job of fetching wounded off the battlefield, how much of a jump is it to make soldier automatons? Once you replace the flesh-and-blood conscience and concern for fighting war in the first place, what will stop continual war? It didn't take long for the military to change course on women in the combat arms when the number of available men to fill the military ranks dropped! Having a ready supply of robotic fighting "men" also allows for even more poor performance by the leadership within the military. No one really dies or gets hurt, other than the taxpayer who is left to pay for their excesses! It has never been a problem for the military in the past; it certainly wouldn't be a concern for them in the future! Robots are the perfect soldier: they don't think of self-preservation, they will act on orders no matter how stupid they might be, and likely will not function based on conscience and concern for others, what we call "heroes" today. War really would become a "game" that only ends when the world is completely depleted of resources to build more robots!