Maybe I’m naive, or was conveniently overlooking the obvious, but when ChatGPT started showing ads to me yesterday, it ruined the experience. I’ve found myself using ChatGPT quite a bit over the past year, and I was often pleasantly surprised and encouraged when questions of a personal nature drew what seemed like neutral and intelligent suggestions and feedback. I certainly had an awareness that by sharing any personal information, I was letting ChatGPT build a profile on me and use past exchanges to respond better in future ones. For example, ChatGPT knows my general age range, some of my interests, and that I am considering real estate and future retirement ideas. Yesterday I was asking about land for sale in the southwest, and I was getting insight about the property that I couldn’t have gotten without flying out there to see it firsthand. But then ChatGPT took it further, offering prompts to help me determine what exactly my motivations were, or what I hoped to attain if I were to buy the property, and gave me insight on how people respond to being in the middle of the desert where there are no towns and not many people nearby.
To be honest, I found ChatGPT’s questions and the scenarios it laid out for consideration to be quite fascinating, and perhaps beyond what I would have considered without actually being there – for example, how would I feel about seeing no people, buildings, or signs of civilization for several days? Was I aware of how little water was out there, and that nearby outposts don’t have much luck digging wells? Was I aware that some people feel uneasy as the sun goes down and there’s no sense of what’s out there beyond what one can hear?
So I’m getting all this feedback and feeling like I’m having a conversation with someone who is pushing me to think deeply, and then it hits: an advertisement for B&H Photo, suggesting that I would likely want to take pictures of my surroundings if I’m out there in the desert in a new environment. What the hell?
My fabricated image above reflects this feeling of trust evaporating once the advisor has a clear motivation to sell something or make money from my conversation. Imagine a patient telling Sigmund Freud about how driving his mother around (since it was always about the mother with Sigmund Freud) caused him stress and anxiety, and then Freud suggests a brand of spark plugs to make the automobile ride smoother. I’m being a bit extreme in my example, and hopefully getting a laugh, because I certainly don’t equate ChatGPT with a professional therapist, but I do find it very unappealing now if there are going to be ads interjected into the conversation.
I actually asked ChatGPT about it: “I’m very uncomfortable with the fact that you’re now showing ads.” ChatGPT replied: “I can’t view the app’s interface from my side. If you’re seeing a separately labeled sponsored item below my reply, that’s an ad shown by the platform and it’s separate from my message – I don’t control or insert those ads.”
What kind of B.S. is that? As if ChatGPT is trying to humanize itself and distance itself from what is happening! C’mon, total B.S. And I should know better – if I’m talking about cars, an ad for spark plugs or motor oil isn’t out of the question. But it suddenly shifts my feeling from “artificial intelligence” to just another software program that harvests my personal information in order to sell me something or make its creator some money. I know there are huge costs involved in creating A.I. and chatbots, and nothing is free – but this really changes how I feel about the product. It’s like the old saying (usually referenced around Facebook): if you’re not paying for the product, you are the product. The company with the “free product” monetizes my data, attention, and behavior to sell to advertisers or to push subscription fees. I’ve heard that even subscribers to paid ChatGPT plans are being shown ads.
It was just one ad, but it opens up the feeling I have about most websites now – there are SO MANY ads, pop-ups, pop-unders, and links to ads that the actual content I’m trying to read is probably one-tenth of what I see on the screen. How long until ChatGPT is the same, littered with ads?
OpenAI states that advertisers don’t have access to user chat history or personal details … am I supposed to take their word for this? In my experience yesterday, it was clear that the B&H Photo ad was not just some random ad but was surely based on past discussions with ChatGPT about all the photos I take. So if advertisers “don’t have access to user chat history or personal details,” then how did they know I like photography?
ChatGPT, if you’re out there reading this, please be clear that you’ve just lost my trust, and most likely my interest in using your product even a fraction of the amount I used to. Not to mention that, as the publisher of a website who creates these posts with my own thoughts and words, I know ChatGPT and other A.I. platforms are scraping (and stealing) this content to integrate into their databases. It’s hard to know what the trade-off is here.