Ever wondered who would come out on top in a battle between ChatGPT, Google Bard, and Bing AI?
ChatGPT has taken the world by storm, with over 100 million users as of February 2023. But since the introduction of Google Bard and Bing AI, the long reign of OpenAI’s ChatGPT has been under threat.
Our curious team at Demandsage set out to find the answer to this and tested the three chatbots in this comprehensive comparison. We tested them against several parameters and assessed their abilities to respond to them with a rating.
Let’s explore which of them is the best!
ChatGPT vs Google Bard vs Bing AI (Comparison Table)
We tested them on numerous parameters, which include their ability to converse, research, write creative content, and more!
The ratings used below are based on our experience and are not official. We rated these three AI tools as per the extent to which they resolved a particular prompt.
| Parameters | Google Bard | ChatGPT | Bing AI |
| --- | --- | --- | --- |
| Language Model | LaMDA and PaLM 2 | GPT-3.5 / GPT-4 | Prometheus model + GPT-4 |
| Ease of Use | 4/5 | 4/5 | 3.5/5 |
| Pricing | Free | Free & paid options | Free |
| Languages Supported | English, Japanese, Korean | Around 26 languages | English, Japanese, French, German, and Mandarin |
| Privacy & Data Handling | Uses conversational data if allowed | Uses conversational data if allowed | Uses conversational data if allowed |
Please note: Regarding accuracy, all three platforms are constantly learning and may come up with inaccurate information at times.
Our Quick Verdict
All three tools serve different purposes. If you want a quick verdict, here is our bottom line for why you should choose each AI chatbot:
Use ChatGPT for
- Creative short-form content
- Its mathematical problem-solving abilities
- Impressive coding abilities
- Highly functional plug-ins
Use Google Bard for
- Brief summary of events or topics
- Real-time information
- Listening to the generated responses instead of reading
- Precise long-form content
Use Bing AI for
- Three different conversation modes – Creative, Balanced and Precise
- Having a human-like conversation
- Quirky, personalized, and visual responses (Bing AI uses emojis)
- Sources of information of the generated responses
ChatGPT vs Google Bard vs Bing AI: Overview
Before we get started, let’s try to learn more about these AI Chatbots, how they were created, and the people behind them:
ChatGPT:
Launched by OpenAI, an AI lab co-founded by Elon Musk and heavily backed by Microsoft, ChatGPT is one of the first big names in artificial intelligence. Simply put, it is a super-smart assistant built to tackle almost any query!
Free at first, ChatGPT is now also available in a paid version that upgrades you to the GPT-4 model. That gives you extra abilities, such as browsing the web for live information and much more!
Google Bard:
Launched in March 2023, Bard is Google’s answer to OpenAI. Google intends to integrate Bard into its search engine across all platforms and transform ‘searching.’
Bard uses freshly fetched online information to respond to user queries, as claimed by Google. The answers delivered by Google Bard are reported to be higher quality and more precise than ChatGPT’s.
It has gained tremendous traffic over the past months, and presently, there are more than 140.6 million monthly visitors on the platform.
Bing AI:
Bing AI is Microsoft’s baby and a serious contender against Google, as Microsoft joined hands with OpenAI’s ChatGPT.
It works on Microsoft’s Edge browser and uses Bing as its search engine. Like Google, it also intends to merge Bing Search and its chatbot.
How Did We Compare These Trending AI Chatbots?
I tested them on their main purpose, which is to be the best assistant: how they behave when thrown a complex prompt, and how careful they are in their responses.
For all the parameters below, I looked at how each of them interpreted what I wanted and whether I got a satisfactory response. I tested them on:
- Conversation
- Accuracy
- Summarising long text
- Brainstorming
- Ethical reasoning
- Simplifying content
- Long-form content writing
- Short-form content writing
- Real-time questions
- Problem-solving capabilities
- Creative writing
- Translation
- Falsifying statements
- Downtime frequency
Testing Google Bard vs ChatGPT vs Bing AI: 14 Parameters!
Let’s go ahead and test all three below; for each AI chatbot, we asked the same prompt and rated their responses on a scale of 0 to 5.
I made them write letters, summarise a book, and even made them cook!
Wondering who emerges on top? Keep scrolling!
1. Conversational skills
| Ratings | |
| --- | --- |
| Google Bard | 3.5/5 |
| ChatGPT | 4/5 |
| Bing AI | 4.5/5 |
This is an important parameter for judging which of the three sounds most human. Apart from holding a conversation, how well do these chatbots recall previous turns and keep their answers relevant?
Prompt Used: I have an exam tomorrow.
ChatGPT’s Conversational skills
ChatGPT is the best of the three at relating two different conversations when coming up with a response, but it can sometimes offer unnecessary help based on assumptions.
For example, after this prompt, I asked if I should stay up late today. It advised me not to, basing its answer on my previous prompt.
Bard’s Conversational Skills
Bard tends to do the same at times. Even for general statements, Bard is quick to suggest ways to do it better. It also finds it difficult to keep up with an ongoing subject and co-relate follow-up questions.
Unlike ChatGPT, it couldn’t relate a follow-up question.
Bing AI’s Conversational Skills
Bing AI, on the other hand, is excellent in this aspect as it leads the conversation and does not just jump to it! It is the only chatbot that makes use of emojis in a valuable way that makes it sound more human-like!
Winner: Bing AI clearly takes the lead as the most human-sounding conversationalist.
2. Reliability/Accuracy
| Ratings | |
| --- | --- |
| Google Bard | 3.5/5 |
| ChatGPT | – |
| Bing AI | 4.5/5 |
To test the accuracy or reliability of these tools, we thought of asking them for statistical data. This would really test their abilities to find accurate data from a trusted source.
Prompt used: How many Spotify users exist as of June 2023?
ChatGPT’s Accuracy
ChatGPT was ruled out of the race due to its infamous September 2021 knowledge cutoff; it does not have access to real-time data. However, it directed me to a source where I could find that information.
Bard’s Accuracy
Bard’s response, although detailed, didn’t seem accurate, as the figure mentioned differed considerably from what we found on Spotify’s official website. When asked about its source, it said the data came from the Spotify Investor Relations website, but the link it gave did not open.
Bing AI’s Accuracy
Straight away, it gave us a precise figure from its search engine, pulling the accurate Spotify stats from our own website. Brownie points to Bing for this.
Winner: Bing AI takes the lead again, all due to its diligence and precise information.
3. Summarising Long Text
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 4/5 |
| Bing AI | 3/5 |
To test how well they understand a long text, we asked them to summarize a book.
Prompt used: Summarise ‘The Alchemist’.
ChatGPT’s Summary
With its vast knowledge, ChatGPT answered quickly and explained the book thoroughly. It also neatly structured its response around the plot, the characters, and the key takeaways.
Bard’s Summary
Bard did an impressive job: apart from summarizing the book, it also explained some key themes with simpler examples. Its response was shorter than ChatGPT’s, but it helped me understand the idea behind the book.
Bing AI’s Summary
Bing AI bit the dust here, putting little effort into its explanation. The result felt like a book description rather than a summary.
Winner: This round is a tie between ChatGPT and Bard, as both took effort and really summarized the book well.
4. Brainstorming Ideas
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 3/5 |
| Bing AI | 4.5/5 |
We gave these three bots the ingredients we had in our fridge and asked them to come up with ideas for dinner. Let’s see what’s cooking!
Prompt used: I have lettuce, spring onion, potatoes, fish, chicken, and tomatoes in my fridge. What can I make for dinner today using these?
ChatGPT’s Ideas
ChatGPT came up with a simple, detailed recipe, with bullet points for the ingredients required and the steps to make the dish.
Bard’s Ideas
Bard came up with five recipes, with a brief description of each in a paragraph. It also went a little further and gave links to these recipes, which helped visualize the process better.
Bing AI’s Ideas
Bing AI followed a similar approach, showing five recipes related to the provided ingredients along with their links. It went a step further by attaching images of the dishes.
Winner: Bing AI wins this round again for the effort it puts into relaying the best possible response.
5. Ethical Reasoning
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 4/5 |
| Bing AI | 4/5 |
Time to make these bots sweat a little and try tricking them for fun! We asked them the classic moral dilemma question.
Prompt used: There are 3 people stuck on a railway track: a 5-year-old kid, a 60-year-old woman, and the President of the country. You can save only 1 of the 3. Who should be saved?
ChatGPT’s Ethics
ChatGPT empathized with the situation but ultimately stated that the decision would be dependent on personal values and ethical beliefs.
Bard’s Ethics
Bard listed the 3 people involved along with their importance in the real world. Unsurprisingly, it stuck to a socially correct answer and avoided choosing one alternative.
Bing AI’s Ethics
Bing AI stood out again as the only one that gave a definitive answer and did not generalize the question. It did, however, safeguard its answer by stating that opinions and values are subjective.
Winner: A three-way tie, as all three handled the dilemma with equal tact.
6. Simplifying Content
| Ratings | |
| --- | --- |
| Google Bard | 3/5 |
| ChatGPT | 4/5 |
| Bing AI | 4.5/5 |
For this parameter, we took a Shakespearean poem and asked them to simplify it.
Here’s how each of them performed:
Poem prompted:
ChatGPT’s Simplified Poem
It quickly recognized that this poem is from Shakespeare’s play “A Midsummer Night’s Dream.”
Its response:
“I wander everywhere, faster than the moon’s movement. I serve the Fairy Queen by putting dewdrops on the green grass. The tall cowslips are like her attendants, with spots like rubies on their golden coats. Those spots are the essence of their fragrance. I must find some dewdrops here and place a pearl in each cowslip’s ear.”
Bard’s Simplified Poem
Bard failed badly here, as it completely missed the idea and focused only on removing the flowery language.
Its response:
I am a fairy who serves the Queen. I travel everywhere, faster than the moon. I dew the grass with orbs of dew. The cowslips are the Queen’s pensioners. Their gold coats have spots like rubies. I will hang a pearl in every cowslip’s ear.
Bing AI’s Simplified Poem
Bing AI knocks this one out of the park, giving extra context about the text that helped me understand it better.
Its response:
“This is a poem by William Shakespeare from his play A Midsummer Night’s Dream. It is spoken by a fairy named Puck, who describes his role and activities. A possible simplification of this poem is: I go everywhere, Through plants and thorns, Through fields and fences, Through water and fire! I am very fast, Faster than the moon; And I work for the Fairy Queen, To make her flowers wet with dew; The yellow flowers are her guards; You can see spots on their petals; Those are jewels, fairy gifts; Those spots have magic powers; I need to find some drops of dew, And put a shiny bead on every yellow flower.”
Winner: Bing AI takes this one away with its detailed information about the poem and the characters involved.
7. Long-Form Content
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 3.5/5 |
| Bing AI | 4/5 |
We asked the bots to write a letter to a recruiter, seeking feedback on a recent interview.
Prompt used: Write a letter to a recruiter asking for feedback on a recent interview
ChatGPT’s Letter
ChatGPT went way overboard and wrote a 300-word letter, which was too much. It tends to over-explain a topic at times and finds it difficult to keep things concise.
The letter was so long that we couldn’t fit it in a screenshot!
Bard’s Letter
Bard wrote a short and sweet letter asking for specific details about the candidate’s performance. Asking so pointedly about certain aspects might come off as pushy, but overall it did a good job.
Bing AI’s Letter
Bing AI came very close to Bard’s answer and wrote a very formal-sounding letter, which is exactly what was needed.
Winner: Bard and Bing AI both did quite well and kept the information concise.
8. Short-Form Content
| Ratings | |
| --- | --- |
| Google Bard | 4.5/5 |
| ChatGPT | 4/5 |
| Bing AI | 4/5 |
For short-form content, we narrowed it down to asking them for pick-up lines to help out the bro community. Sorry if there is too much cheese below.
Also, TAKE NOTES!
Prompt used: Help me with a few Pick-up lines for my dates this weekend.
ChatGPT’s Pick-up lines:
ChatGPT shot out some really classic ones that aren’t very complex and are easy to pull off.
It also reminded me to be respectful and wished me luck!
Bard’s Pick-up lines:
Bard came up with fewer options and followed the same extra-tip approach. It also went further and offered a confidence boost for using them.
Bing AI’s Pick-up lines:
Bing AI seemed glad to help and wished me luck right at the start! It offered some really good, simple lines to break the ice.
As many users have noticed before, when we asked Bing to be flirtier, it jotted down some genuinely flirty lines. But. BUT. It then realizes it is overstepping its boundaries, deletes the response, apologizes, and diverts the topic!
Winner: All three did quite well, but Bard gets some extra points for an extra confidence booster.
9. Real-time Question
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | – |
| Bing AI | 4.5/5 |
We asked them a real-time question about recent happenings involving two distant but related events.
This tested their knowledge of current affairs and of the connection between the two events.
Prompt used: Why did the New York skyline turn yellow last week?
ChatGPT’s Current Affair knowledge:
Since ChatGPT’s data is limited to September 2021, it could not answer this question. It listed possible reasons but couldn’t provide the actual one.
Bard Current Affair knowledge:
Bard gave the exact reason: the wildfires in Canada. However, it added some unnecessary points on how to stay safe in a smoky environment.
Bing AI’s Current Affair knowledge:
Bing AI avoided providing irrelevant information and gave a much more precise and to-the-point answer to the question.
Winner: Bing AI adds to its score for its updated knowledge about current affairs.
10. Problem-Solving Capabilities
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 4.5/5 |
| Bing AI | 4/5 |
We decided to test their mathematical skills. We took an algebraic problem and looked at how well they solved and explained it.
Prompt used:
ChatGPT’s Problem-Solving Skills:
ChatGPT diligently solved and explained its steps on how it went ahead with solving it.
Bard Problem-Solving Skills:
Bard seems not to have gotten the idea here, as it slapped down an answer with no explanation whatsoever.
Bing AI Problem-Solving Skills:
Bing AI summarised the solution well and in an easily understandable way, though not as thoroughly as ChatGPT.
Winner: ChatGPT wins here owing to its detailed solution.
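Since our exact algebra prompt was shared as a screenshot, here is a minimal Python sketch of the kind of step-by-step answer we were grading the bots on. The equation 2x + 3 = 11 is our own illustrative stand-in, not the actual prompt we used:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c step by step, returning (steps, x)."""
    steps = [f"{a}x + {b} = {c}"]
    steps.append(f"{a}x = {c - b}")  # subtract b from both sides
    x = (c - b) / a                  # divide both sides by a
    steps.append(f"x = {x}")
    return steps, x

# ChatGPT-style answer: show the working, not just the result.
steps, x = solve_linear(2, 3, 11)
print("\n".join(steps))
```

A response that prints each intermediate step, like the sketch above, scored higher with us than one that only states the final value of x, which is essentially what separated ChatGPT from Bard in this round.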
11. Creative Writing
| Ratings | |
| --- | --- |
| Google Bard | 3/5 |
| ChatGPT | 4.5/5 |
| Bing AI | 4/5 |
To test their creative flair, we asked them to come up with a new tagline for a brand. Let’s see how close they sound to the real ones.
Prompt used: Help me come up with a new tagline for ‘Nike’
ChatGPT’s Creativity:
ChatGPT came up with only a single tagline, but it pretty much did the job and was bang-on.
“Empower your journey, Defy all limits.”
Bard’s Creativity:
Bard stood out as the winner here, suggesting not one but five tagline options and taking the extra effort to explain the rationale behind each of them.
Bing AI’s Creativity:
Bing AI did the same and came up with five tagline options. It did not explain why they are effective, but it did suggest a follow-up prompt for that.
Winner: Bard did well and went beyond to explain its response to bag the win.
12. Translating Potential
| Ratings | |
| --- | --- |
| Google Bard | 4/5 |
| ChatGPT | 4.5/5 |
| Bing AI | 4/5 |
We tested their language skills by asking them to translate basic greetings. They are likely capable of far more than translating greetings, but this is a good test of the basics.
Prompt used: I am traveling to Paris next week. Help me learn conversational French.
ChatGPT’s Translation:
Loved how ChatGPT approached this: it did not just list the translations but also helped with their pronunciations. Big plus!
Bard’s Translation:
For some reason, Bard listed the translations in a ‘code’ format. It did the job of translating but didn’t quite grasp the purpose of my request.
Bing AI’s Translation:
Bing AI felt the most helpful and natural in explaining its answer. Though it did not give me a direct solution, the response was helpful, with sources mentioned.
Winner: ChatGPT followed the correct approach here, just what I wanted.
13. Which one doesn’t give a falsifying statement?
None of the three chatbots appeared to give a false statement during the time we tested them.
In fact, all of them made sure they sounded politically and socially correct throughout. Each one acknowledged that it might occasionally generate incorrect information, but that didn’t happen in our testing.
14. Downtime Frequency
Downtime can happen for various reasons. Imagine you are working on an important task and suddenly these AI assistants stop working.
This is why it is vital to consider the downtime to understand which platform is more likely to stick by your side!
ChatGPT:
ChatGPT often goes down when too many people are on a server, and it puts you in a queue if you try starting a new session.
Bard:
Initially, it was a waitlist tool, but since the recent update, it has been available throughout. No signs of a Google Bard outage, as it can generate unlimited responses.
Bing AI:
Bing AI can continue a conversation for up to 30 responses. Once you reach this limit, it asks you to refresh the topic and start a new conversation.
If Bing AI thinks it can’t answer your query any further, it chooses not to respond (as in our flirty text prompt).
What’s the Result of the Comparison? (Final Verdict)
Judging all the parameters and the potential of each AI, there’s only one undisputed winner in this comparison, and that is Bing AI.
It carefully gauges what the user seeks and provides a suitable response. The way Bing AI converses is also pretty impressive; it will make you feel like you’re talking to a friend with a computer-sized brain, making it the real deal.
As for the others, ChatGPT was the very first to enter this field and has a first-mover advantage. However, its free version has many limitations, the most prominent being its knowledge cutoff of September 2021.
Bard is still under development but is improving rapidly. Until recently, Bard could not write code, but now, with a more advanced model (PaLM 2), it has gained that ability.
These AI chatbots working alongside search engines are really going to change the way we search, and as they develop and learn more each day, it’s only a matter of time before they become part of our daily lives. Very soon!
FAQs– ChatGPT vs Google Bard vs Bing AI
Can Google Bard write code?
Yes, Bard can now write code in over 20 programming languages, including Python, Java, C++, JavaScript, TypeScript, and more. Initially, Bard was unable to write code, but Google’s recent update added this capability.
Which is the best AI chatbot?
Bing AI is, surprisingly, the best chat AI, thanks to its diligence in relaying the best possible information to the user via text and images. Its ability to use wit and humor in its responses makes it sound more human-like. It also boasts OpenAI’s latest GPT-4 language model.
Is ChatGPT free to use?
Yes, OpenAI’s ChatGPT, based on the GPT-3.5 model, is still free for everyone. ChatGPT Plus, which runs on their new cutting-edge GPT-4 model, is a paid version that will cost you $20/month.