ChatGPT - The Good, The Bad And The Ugly, A Developer’s Perspective

The success of OpenAI’s ChatGPT product (gpt-3.5-turbo) has dominated the recent conversation around Artificial Intelligence (AI). This miraculous movement of electrons is changing the way the entire workforce operates and thinks about its future.

ChatGPT is built on top of a Large Language Model (LLM). Essentially, the AI is trained on huge amounts of text from various sources. Like most of my peers, I had to take a look for myself to understand how this might be used in my work. I started by exploring the developer APIs and built an experimental application using the OpenAI API, along with a simple database and a basic user interface.

During the build, it became clear to me why they call it ChatGPT (Generative Pre-trained Transformer). The algorithms are pre-trained on large amounts of text; when the user sends text to the endpoint, that text is processed against the pre-trained information, and the algorithm selects the most likely series of words to follow, such as an answer to a question or the next words in a conversation. This is an oversimplification, but it makes the tool easier to explain to non-technical colleagues (family). I’ve also heard it described as a super powerful auto-complete (and therefore not self-aware enough to create original ideas).
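To stretch that auto-complete analogy, here is a toy sketch in JavaScript (deliberately crude, and nothing like GPT’s actual architecture) that picks the next word purely by counting which word most often follows each word in some training text:

// Toy "auto-complete": count word-pair frequencies in the training text,
// then always pick the most frequent follower. Illustration only.
const training = "the cat sat on the mat and the cat slept".split(" ");

const counts = {};
for (let i = 0; i < training.length - 1; i++) {
    const word = training[i];
    const next = training[i + 1];
    counts[word] = counts[word] || {};
    counts[word][next] = (counts[word][next] || 0) + 1;
}

function mostLikelyNext(word) {
    const followers = counts[word] || {};
    return Object.keys(followers).sort((a, b) => followers[b] - followers[a])[0];
}

console.log(mostLikelyNext("the")); // "cat", because "cat" follows "the" most often

Real models work with probabilities over tokens learned by a neural network, not raw counts, but the shape of the task is the same: given what came before, choose the most likely continuation.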

Using the gpt-3.5-turbo API, a request is sent to the chat completions endpoint as an array of message objects, each with a role and some content. A simple input would look like this:

[
    {role: "system", content: "You are a junior frontend engineer, assisting with a Next.js project."},
    {role: "user", content: "What is the best way to secure an API in the pages folder?"},
    {role: "assistant", content: "ChatGPT responses here…"}
]

When the input is received, electrons move, and the API returns the assistant’s message: in effect, “ChatGPT responses here…” is replaced with the AI’s response. For this particular prompt, the response is as follows:

“The best way to secure an API in the pages folder of a Next.js project varies depending on the level of security needed and the specific requirements of the project. Generally, there are a few factors to keep in mind:

1. Authentication: Require authentication for all API requests. This can be done using JWTs (JSON Web Tokens) or other authentication methods.
2. Authorization: Make sure that only authorized users are allowed to make requests to the API. This can be done by implementing role-based access control or other authorization mechanisms.
3. Rate Limiting: Implement rate limiting to prevent abuse of the API. This can be done using tools like rate-limiting middleware or using a third-party service.
4. Encryption: Make sure that any sensitive data transmitted between the client and the API is encrypted using SSL/TLS.
5. Validation: Validate all inputs to the API to prevent SQL injection, cross-site scripting attacks, and other security vulnerabilities.
6. Monitoring and Logging: Monitor the API for unusual activity and log requests and responses for auditing purposes.

Overall, securing an API in the pages folder of a Next.js project requires careful consideration of the security requirements and the implementation of various security mechanisms to ensure the API is secure and protected against malicious attacks.”
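Behind the scenes, producing that response takes only a few lines of code. Here is a minimal sketch of the request in JavaScript, assuming Node 18+ (for built-in fetch), a script run as an ES module, and an OPENAI_API_KEY environment variable; error handling is omitted for brevity:

// Minimal sketch: send the messages array to the chat completions endpoint.
const messages = [
    {role: "system", content: "You are a junior frontend engineer, assisting with a Next.js project."},
    {role: "user", content: "What is the best way to secure an API in the pages folder?"}
];

const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({model: "gpt-3.5-turbo", messages})
});

const data = await response.json();
// The assistant's reply takes the place of the placeholder in the earlier array.
console.log(data.choices[0].message.content);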

The Good
The answer provided for this simple question is decidedly good, and I would likely have answered with similar information. As a practitioner, I can verify that the information above is correct. The tool performed the job successfully, covering steps I might have forgotten to mention in my own answer. Human error accounts for 99.9999% of the world’s problems, so we should not expect a tool designed by humans to be infallible. Using the tool as an assistant lets me provide the best information for my purposes, provided I verify the results and test them thoroughly.
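To make that verification concrete, here is a hedged sketch of how a few of the listed steps (authentication, authorization, and validation) might look in a pages/api route. The jsonwebtoken package, the JWT_SECRET variable, and the route itself are assumptions for illustration, not a complete security setup:

// pages/api/profile.js: illustrative sketch only, not a complete security setup.
import jwt from "jsonwebtoken"; // assumed dependency for this sketch

export default function handler(req, res) {
    // Authentication: require a Bearer token on every request.
    const auth = req.headers.authorization || "";
    const token = auth.startsWith("Bearer ") ? auth.slice(7) : null;
    if (!token) {
        return res.status(401).json({error: "Missing token"});
    }

    let claims;
    try {
        claims = jwt.verify(token, process.env.JWT_SECRET);
    } catch {
        return res.status(401).json({error: "Invalid token"});
    }

    // Authorization: role-based access control on the decoded claims.
    if (claims.role !== "admin") {
        return res.status(403).json({error: "Forbidden"});
    }

    // Validation: sanity-check inputs before using them.
    const {userId} = req.query;
    if (typeof userId !== "string" || !/^\d+$/.test(userId)) {
        return res.status(400).json({error: "Invalid userId"});
    }

    res.status(200).json({userId, requestedBy: claims.sub});
}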
 
The Bad
Many who don’t understand the technology attempt to use it in ways that don’t fit the product, without adequate testing. I have personally noticed an influx of articles aimed at using ChatGPT to make money. For instance, a boom in e-books on Amazon shows the willingness of authors to use this new technology to augment their own abilities.

There are pragmatic applications for using this tool to augment our work, but fully replacing the decision-making and creativity of humans is inadvisable. As much as ChatGPT appears to have human intelligence, it can only generate the most likely completion from its pre-trained information. Knowing truth from fiction is a human job. Since the result is likely to sound credible, make sure you check the facts and use material only from trusted sources.

The Ugly
I recall a term used by my Introduction to IT professor that still resonates today: GIGO, Garbage In, Garbage Out. What gets put into the computer directly affects what comes out. The same applies to AI. If the training data included bias, misinformation, or poor security practices, the results would be flawed or incorrect. One example of AI providing misinformation can be found here: OpenAI Sued for Libel After ChatGPT Allegedly Accuses Man of Embezzlement.

From a security perspective, the use of company-confidential or proprietary information is also a cause for concern. When a user places confidential information into a product controlled by another company, the use, storage, viewing, and sharing of that information are largely unknown. Google warns employees about using its own chatbot.

Further, users have recently started exploiting the predictive nature of Generative Pre-trained Transformers to bypass security measures, e.g., ChatGPT can generate Windows Activation Keys.

In the event that these models are successfully trained to decrypt authentication tokens or other security keys, cybersecurity will be compromised in ways not yet fully understood.

Conclusion
The excitement around ChatGPT and its potential business applications has controlled the narrative, while the ethical ramifications have yet to be fully explored. This is the beginning of a new era of technology and, without intervention, it could negatively influence the ethical framework of society. A recent survey by Salesforce shows 67% of IT leaders want AI for their business. In other words, this is coming fast, whether we’re ready for it or not.

So, I wondered, what does gpt-3.5-turbo think of itself? I asked:

“Should a user of ChatGPT use the tool to perform the daily activities of a human or rely solely on the information provided by the tool?”

The response: “No, a user of ChatGPT should not rely solely on the tool to perform daily activities of a human. ChatGPT is designed to provide responses to specific queries that a human might have, but it lacks the ability to fully engage in human-like activities such as decision-making, problem-solving, and empathizing. While ChatGPT can provide accurate and helpful information, it is not recommended for a user to rely solely on the information provided by the tool without verifying it through other sources. ChatGPT is an Artificial Intelligence language model that learns from its training data, which may contain inaccuracies, biases, and incomplete information. Therefore, it is always advisable to cross-check the information provided by ChatGPT with other reliable sources before making decisions or taking actions based on it.”

It is important to understand this transformative technology as many questions remain to be answered, including the legal liability for developers, new government regulations, and newly established industry standards.

OpenAI has done a fantastic job creating a user-friendly experience, placing constraints around the AI, training it on generally appropriate information, and overall, developing an exceptional product that has garnered household recognition. However, it may not be the best tool for all workloads, and not all AI is suited for your business.

That said, we’re working to find ways that ChatGPT can assist our clients in routine report writing while ensuring the underlying data is linked to ground truth and processed within secure guardrails (see our Research Report Demo). Trust is the critical quality in our solutions, with or without AI.

At Bintel, we augment AI with human analytics to enhance decision-making and provide insights verified by experts. Please share your own experiments and thoughts on the future of AI. 

Banner image generated by DALL-E

Topics: ai, artificial intelligence, technology, tech, data, engineering, software, programmer, developer, javascript, json, gpt, programming, coding, api, cybersecurity

Written by Chris Dyal

Security Analyst & Software Engineer at Bintel Inc