How Artificial Intelligence will shape the future

Jacob Kostuchowski
Reno Tahoe Business Report
5 min read · May 2, 2023

--

Jacob Kostuchowski looks into how artificial intelligence is being used, along with the positives and negatives it can bring.

Open laptop on a desk displaying ChatGPT. Photo by Hatice Baran, courtesy of pexels.com

Within the last year alone we have seen huge leaps in what artificial intelligence is capable of. Whether it be OpenAI with ChatGPT and DALL-E, Midjourney, or the plethora of other A.I. software, it is clear that this technology is here to stay. So what can it do for us? The good, the bad and the ugly.

What type of A.I. tech is out there right now?

What does ChatGPT do? That is the question I typed into the language software. The response it delivered is below.

“As ChatGPT, my primary function is to assist users in generating human-like text-based responses to their questions or inputs. I do this by utilizing a large language model that has been trained on massive amounts of text data to learn how to predict and generate text that is coherent and relevant to a given input.”

In short, ChatGPT is a program that lets a user ask a question and generates an answer for them. The technology only works, though, because it has been trained on enormous amounts of text data gathered from the internet.

Obviously, OpenAI is not the only company creating software like this. Google has its own version, called Bard, which works very similarly to ChatGPT.

That is just text-based generation, though; A.I. has also moved into the realm of image generation. The two largest players, Midjourney and OpenAI’s DALL-E, function much like their text-based counterparts. The difference is that image generators are trained on art and images scraped from the internet, and that has led to quite a bit of controversy.

Controversies with A.I.

One of the largest critiques thrown at this technology is that it is essentially plagiarizing artists’ creative work. By scraping the internet for images, it is taking many artists’ work without their consent and using it to create new images.

The website DeviantArt, a platform many artists use to share their work, went through this very controversy with its own A.I. art generator, DreamUp. The issue lay in the fact that the company automatically opted all users into having their work used without notifying them.

Users were furious with this decision because it took away their agency to remove their work from this learning technology. DeviantArt, along with Stability AI and Midjourney, is now named in a class-action lawsuit over the alleged plagiarism of art.

Nicholas Ward, a local photographer, said that the use of A.I. technology in this way is detrimental to the artist.

“A.I. takes the power away from the artist, so then it discredits artists as a whole,” Ward stated.

Another major controversy in this sphere is not necessarily about the technology itself, but about how it has been developed. OpenAI has been catching quite a lot of heat for underpaying workers in Kenya to do content moderation.

It seems this artificial intelligence is not as artificial as it appears: it still relies on humans to filter and label hateful content so that the generator does not produce hate speech. That is where the Kenyan workers come into play.

OpenAI outsourced this labeling to a company called Sama, which paid its workers in Kenya only $2 an hour to sort through some of the darkest content the internet has to offer: depictions of murder, child sexual abuse and torture, to name a few. Many argued that these were inhumane conditions for workers to endure for such little pay.

And finally, there is the controversy most people think of when it comes to text-generative A.I.: using it to cheat in school.

Many teachers, professors and parents alike are worried that students will use this technology to essentially do their homework and assignments for them. Plagiarism detection services have clearly taken note of this worry. Companies like Turnitin have already begun developing A.I. detection technology to combat the use of A.I. in academic settings.

How it is already being used

Even though this technology feels so new, companies have already begun to incorporate artificial intelligence generators into their workflows.

One of the largest companies in the world, Microsoft, has invested over $13 billion in OpenAI. The company has started incorporating OpenAI’s technology into its sales and marketing software, GitHub coding tools, the Microsoft 365 productivity bundle and Azure cloud. It has even integrated the technology into the Bing search engine.

It is not just tech companies using this technology in a professional setting; even newsrooms are testing the integration of A.I. into their workflows.

Insider is beginning tests to figure out how to incorporate A.I. into its newsroom. It is creating a specific test group that will try using A.I.-written text in its articles, while the rest of the newsroom is encouraged to use the tool for crafting headlines optimized for search engines, fixing typos and generating outlines.

Other newsrooms are likely to follow suit. As Insider’s editor-in-chief, Nicholas Carlson, said in an interview with Axios, “A tsunami is coming, we can either ride it or get wiped out by it.”

How students feel

With all this being said, college students are among the people this technology will affect the most. Whether it be the loss of entry-level positions or the loss of human connection, it seems they are worried.

Adelynn Puett, a student at the University of Nevada, Reno, is more concerned about the human side of the technology and how it will affect the way we interact.

“I think overall, it can kind of push against human development in terms of communication, because I feel like a lot of what A.I. does is take information that’s already been done and rehash it to people. And that’s kind of the A.I. of it.” Puett continued, “But I just feel like it’s almost detrimental to humans and their ability to communicate with each other.”

Damien Vinci, another student at the university, said that something needs to be done on a regulatory level for this kind of technology to be safe.

“I think that this needs to be regulated better. And we don’t have the people in power here in America who can make informed decisions on this stuff,” Vinci said. “These people who are in power and making these decisions here in America are so out of touch that we have to figure out something or this could become extremely problematic.”

While artificial intelligence can be a significant tool for a multitude of purposes, at this point it should be viewed with at least some degree of caution. But it is here, and it is here to stay, so it is up to the consumer to decide what to do with it.


Jacob Kostuchowski
Reno Tahoe Business Report

Undergrad journalist attending the Reynolds School of Journalism at the University of Nevada, Reno.