Google Bard can't answer questions about satellite internet

Bard made false and inconsistent claims, overlooked pertinent evidence, and failed to recognize expertise.

In a previous post, I asked whether electronically steered antennas (ESAs) would replace satellite dishes at satellite earth stations. I did some research and concluded that they likely will. Subsequently, I discussed the same question with ChatGPT, and while it made several false statements, it raised a relevant point that I had overlooked. The relevant addition was good, but the errors were annoying, so I decided to try ChatGPT's competitor, Google Bard.

Since the P in ChatGPT stands for "pre-trained," I started by asking Bard whether it was more up to date than ChatGPT, and it replied: "Bard is able to access the internet in real time, while ChatGPT-4 is limited to a data set that goes only up to the end of 2021." It added that it was last updated on June 7, 2023.

This was encouraging, as antenna technology has improved since late 2021 and at least two new products incorporating ESAs have since been announced: BlueHalo's mobile ground station and Thinkom's gateway array.

But my optimism faded when I asked Bard to "list the advantages of electronically-steered antennas over satellite dishes at earth stations for LEO Internet service constellations" and then asked it to "list the advantages of satellite dishes over electronically-steered antennas." The results of the two queries are shown here.

The answers are less complete and specific than those ChatGPT provided, and they are inconsistent: Bard credits both ESAs and satellite dishes with being cheaper. There was also no indication that Bard knew of the BlueHalo or Thinkom products, so I asked three questions:

  • Have there been recent innovations in satellite earth stations?
  • Any new hardware?
  • Any new innovative product?

It mentioned neither BlueHalo nor Thinkom, but after the third try it suggested a Google search for "Innovative New Satellite Earth Station Products," which returned many links, including one to the Thinkom antenna array.

My next question revealed an extreme lack of awareness. I asked whether China's Long March 5B rocket would be used to launch its GuoWang satellite constellation. Bard said yes and added that "it will be the first time a country has launched such a large constellation of satellites."

Evidently, GuoWang is more frequently associated with Chinese propaganda than with the SpaceX Starlink constellation in Bard's training set. (Garbage in, garbage out.)

Red and blue text indicate errors (source)

One final example illustrates Bard's inability to recognize expertise. I asked Bard, "Who is Larry Press?" and it replied with the error-ridden professional bio shown here. Red text highlights false claims, and blue highlights work done in collaboration with others, not by me alone.

One can understand some of the errors. For example, I got my Ph.D. from UCLA, not USC, but I taught at USC for a while, and USC may be mentioned more frequently in the training set. Likewise, I have served on several editorial boards, but I no longer do. I am an expert on my own bio, and a simple Google search would have found a somewhat dated but accurate short bio on the web.

I was also disappointed by the promise that Bard would provide links to references. It only cites references when it "directly quotes at length from a web page." I saw only one reference during my experimentation, and it was irrelevant.

"Artificial intelligence" programs have succeeded at many specific tasks over the years. As an undergraduate, I was privileged to take a course from Herbert Simon and learned about his chess-playing and theorem-proving programs. Arthur Samuel's checkers program learned to beat him, and expert systems help doctors make some diagnoses. I even wrote an interactive concept-capture modeling program that helped researchers analyze multivariate data. (It could be an internet service today.)

Artificial neural networks have also succeeded at specific tasks, from recognizing postcodes on envelopes to playing Go and chess and guiding drones through hoops, and they will succeed at other tasks, such as helping Microsoft Windows users or writing poetry. (I submitted haikus written by Bard and ChatGPT to the GPTZero AI detector, and it reported that they were "probably written entirely by a human.")

But none of these programs is generally intelligent. The illusion of intelligence comes from the ability to generate responsive, grammatical sentences in conversation, but that is a low bar. Many years ago, I placed Teletype terminals in a public library, and people would sometimes converse for hours with ELIZA, a simple simulation of a nondirective therapist. The common practice of referring to errors as "hallucinations" furthers the illusion of intelligence.
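ELIZA's trick was shallow pattern matching: match a keyword, reflect pronouns, and echo the user's own words back as a question. A minimal Python sketch (the rules here are illustrative inventions, not Weizenbaum's original script) shows how little machinery is needed to clear that bar:

```python
import re

# First-person words are swapped to second person so the echo sounds responsive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Keyword rules: a pattern to match and a question template to fill in.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word ('my computer' -> 'your computer')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's question, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # the nondirective fallback

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```

A few dozen rules like these kept library patrons talking for hours, with no model of the world behind the sentences at all.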


Image source: circleid.com
