ChatGPT is a gamechanger at Wharton School of Business - Politics Forum.org | PoFo

Wandering the information superhighway, he came upon the last refuge of civilization, PoFo, the only forum on the internet ...

The solving of mankind’s problems and abolition of government via technological solutions alone.

Moderator: Kolzene

Forum rules: No one line posts please.
#15263581
It looks like chatbot technology is here to stay. A Wharton business school professor says it is a game changer and can pass an MBA essay exam. But it is a master bullshitter and will invent anything you ask it to, regardless of truth. It can be used to cheat on exams.

What is your opinion on this new technology? Read this and tell me what is going on.

https://knowledge.wharton.upenn.edu/art ... 0%99s_Next
#15263588
I don't think passing an MBA exam is the best example of what ChatGPT can do. It can also pass the US Medical Licensing Exam, which should be harder:

https://www.medrxiv.org/content/10.1101 ... 643v2.full

I find it interesting. I don't think this means AI can replace your doctor, but it should help doctors a lot. In particular, a doctor may be able to see more patients than now, making healthcare cheaper.

The same can likely be said about other professions. E.g., this type of bot will also help in management, even if it will obviously not become a manager.
#15263590
wat0n wrote:I don't think passing an MBA exam is the best example of what ChatGPT can do. It can also pass the US Medical Licensing Exam, which should be harder:

https://www.medrxiv.org/content/10.1101 ... 643v2.full

I find it interesting. I don't think this means AI can replace your doctor, but it should help doctors a lot. In particular, a doctor may be able to see more patients than now, making healthcare cheaper.

The same can likely be said about other professions. E.g., this type of bot will also help in management, even if it will obviously not become a manager.


Again, I think the issue is basically why these tools that free up people's time don't actually translate into valuing human beings' ability to dedicate themselves to really rewarding work. Instead, huge groups of humans remain stuck in poverty, with bad educations and few opportunities for advancement, and that trend continues worldwide. If these advances are not harnessed to serve poor people in poor nations and to advance all the people in all the nations, and instead they become available only to the nations with money and power, they are useless. You will not change the world by hoarding advanced tools exclusively for societies that sell these products at a high price.

Pharmaceuticals are an example. What good is it if most people can't afford the high price of drugs, insulin, etc.? You have high-quality pharmaceutical products, but only a small group can pay? How the hell does that make the world better? Socialize education and medicine. Get the for-profit shit out of it, fund it through public money controlled by taxpayers, and put caps on profit, or bar companies from marketing a product if it does not benefit everyone in the society.

ChatGPT is a danger, according to the article, because it can be a catalyst for lying and inaccurate information. It learns, but it lacks the critical thinking and creativity that are independent of its information and programming. It can be programmed to lie, too. Think about that. What if all of the information you receive in the world might be false? How many political, academic, and personal mistakes would you be likely to make if the information you are spoon-fed and manipulated by is all false? It is a disaster, Wat0n.
#15263646
@Tainari88 @wat0n

God damnit. I am really frustrated about this: the media and people who don't understand the technology are trying to push it for uses where it will be stupidly bad.

ChatGPT is a statistical model for language processing, meaning you don't know how many misconceptions it has learned that are far, far away from the truth. Also, if it doesn't know the answer, it will try to synthesize one. While this approach might be good for answering "What is X?" or "How do I X?" questions, it is not good at answering complicated synthetic questions. IT DOES NOT have an understanding of the concepts that might be needed to answer them, like spatial awareness, time awareness, and so on. The easiest way to explain it is this: ask ChatGPT, if one woman needs nine months to give birth to a baby, how many months do nine women need to give birth to a baby? Its answer will be one month, because it treats the question as a statistical pattern.
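To make the "statistical model" point concrete, here is a toy next-word predictor. This is not how ChatGPT actually works internally (a real LLM is a neural network trained on billions of words, not a bigram table), and the tiny corpus is made up; it only illustrates the underlying idea of picking the statistically likely continuation rather than reasoning about the question:

```python
from collections import Counter, defaultdict

# Made-up toy corpus. A real model trains on billions of words.
corpus = ("one woman needs nine months . nine women need nine months . "
          "nine workers need one day .").split()

# Count which word follows which word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the word that most often followed `prev` in the corpus."""
    return following[prev].most_common(1)[0][0]

# "nine" was followed by "months" more often than anything else,
# so the model says "months" regardless of what the question means.
print(predict("nine"))
```

The model has no concept of pregnancy or parallelism; it just emits whatever continuation was most frequent in its training data, which is the mechanism behind the nine-women-one-month style of error.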

The problem with applying it to medicine or any complicated subject is the error rate, and its output needs to be peer reviewed by specialists first. Passing an MBA test that mostly consists of how-to or what-is questions is not the same as using it to treat people, where the question is more like: pulse is X, these bacteria or antibodies are present, and so on; what is the problem? That is a heavily synthetic question which ChatGPT will very likely not be able to handle accurately. It knows language, but it doesn't know the correlations between different bacteria, antibodies, or other medical parameters well enough to put them all together and produce an analysis.
#15263672
JohnRawls wrote:@Tainari88 @wat0n

God damnit. I am really frustrated about this: the media and people who don't understand the technology are trying to push it for uses where it will be stupidly bad.

ChatGPT is a statistical model for language processing, meaning you don't know how many misconceptions it has learned that are far, far away from the truth. Also, if it doesn't know the answer, it will try to synthesize one. While this approach might be good for answering "What is X?" or "How do I X?" questions, it is not good at answering complicated synthetic questions. IT DOES NOT have an understanding of the concepts that might be needed to answer them, like spatial awareness, time awareness, and so on. The easiest way to explain it is this: ask ChatGPT, if one woman needs nine months to give birth to a baby, how many months do nine women need to give birth to a baby? Its answer will be one month, because it treats the question as a statistical pattern.

The problem with applying it to medicine or any complicated subject is the error rate, and its output needs to be peer reviewed by specialists first. Passing an MBA test that mostly consists of how-to or what-is questions is not the same as using it to treat people, where the question is more like: pulse is X, these bacteria or antibodies are present, and so on; what is the problem? That is a heavily synthetic question which ChatGPT will very likely not be able to handle accurately. It knows language, but it doesn't know the correlations between different bacteria, antibodies, or other medical parameters well enough to put them all together and produce an analysis.


I'm a statistician, so I'm not overrating anything here. It seems the USMLE also includes complex scenarios closer to real-world conditions (i.e. it isn't just a matter of googling the answer). For the bot to get those answers right, it likely drew from similar existing cases in its training sample, which makes it quite useful as an assistant.

One criticism I've read is that ChatGPT simply has too many parameters, and the model could be made simpler and cheaper to deploy with similar results. That is, it's lacking some regularization. But that is a very technical criticism, even if it's indeed important from a practical point of view.
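For readers unfamiliar with the term, here is a minimal sketch of what "regularization" means, using ridge regression on made-up synthetic data; this says nothing about ChatGPT's actual architecture, only about the general idea of penalizing model complexity while keeping the fit good:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y really depends on only 2 of 20 features;
# the other 18 are pure noise the model could overfit to.
X = rng.normal(size=(100, 20))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

def ridge(X, y, lam):
    """Closed-form ridge regression: solve (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_plain = ridge(X, y, lam=0.0)   # ordinary least squares, no penalty
w_reg = ridge(X, y, lam=10.0)    # penalized: weights are pulled toward zero

# The penalized model uses "smaller" parameters overall
# while its predictions remain close to the targets.
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

The analogy to the criticism above: a model with fewer or smaller effective parameters can be cheaper to deploy while performing about as well, which is exactly the trade-off regularization formalizes.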
#15263673
wat0n wrote:I'm a statistician, so I'm not overrating anything here. It seems the USMLE also includes complex scenarios closer to real-world conditions (i.e. it isn't just a matter of googling the answer). For the bot to get those answers right, it likely drew from similar existing cases in its training sample, which makes it quite useful as an assistant.

One criticism I've read is that ChatGPT simply has too many parameters, and the model could be made simpler and cheaper to deploy with similar results. That is, it's lacking some regularization. But that is a very technical criticism, even if it's indeed important from a practical point of view.


It does draw from the text it was trained on, so obviously it is not just googling the answer, and that is why it can answer better than a plain Google search. But that has been my point: at its core, ChatGPT is a smarter, context-aware search engine. And a search engine is still a search engine. It cannot answer questions that can't easily be searched, that haven't been invented by humanity, that aren't understood by us, or that require a complex understanding of time, space, and so on.
#15263683
Again, @JohnRawls, the assistant is fine, but if you need complexity it will fail.

The good thing about all this technology might be saving time on basic research into a specific subject.

That is why my translation and interpreting business is booming now. Interpreting from one human language to another in real time is not simple at all. It is complex. The machines can't get it right yet. It requires analysis.
#15263689
ChatGPT seems to be the "It" technology to invest in these days.

The state university I attend sees AI use in the classroom as cheating. I have to agree. If you took the effort to apply to college and got accepted, why not spend more effort to write your own papers? Writing is not hard.

Wouldn't it be crazy if 90% of all business emails were generated by ChatGPT?
#15263694
Tainari88 wrote:Again, @JohnRawls, the assistant is fine, but if you need complexity it will fail.

The good thing about all this technology might be saving time on basic research into a specific subject.

That is why my translation and interpreting business is booming now. Interpreting from one human language to another in real time is not simple at all. It is complex. The machines can't get it right yet. It requires analysis.


And an AI may miss all the cultural subtleties that may be necessary for translating a text. If you want to translate a text to Spanish, what type of Spanish are you translating it to exactly? Mexican, Puerto Rican, Spaniard, Chilean? They are not the same. And an AI might not get the difference.
#15263706
wat0n wrote:And an AI may miss all the cultural subtleties that may be necessary for translating a text. If you want to translate a text to Spanish, what type of Spanish are you translating it to exactly? Mexican, Puerto Rican, Spaniard, Chilean? They are not the same. And an AI might not get the difference.


You can program it to take into consideration geographical differences in registers. The problem is not that. The problem is being able to capture meanings that are not literal: metaphorical, poetic meanings that have to do with human beings being creative thinkers, able to evolve language to fit changing conditions. That only happens with independent, critically thinking living beings. Human beings are hard-wired to acquire a human language. They are predisposed to that, but the way their environment is set up plays an important role. Piaget's theories on childhood development are basically not valid anymore. Language is incredibly difficult to reproduce in a machine that depends on programming from an external source. Our programming comes from nature; the natural world picks our endemic talents and proclivities. Computers are programmed by other humans, who are limited by their own experiences. That is why you have biases and prejudices showing up in human-programmed computer systems, ones that favor middle-class people, or males, or white males, or English speakers, or this or that group. The programmers are limited. Trying to get a computer to have the same innate, intuitive expansiveness that human beings have? It is hard as hell. Natural science is hard to fake: it follows natural law through the scientific method, and if a result is wrong, you discard it and continue on.

The humanities and social sciences are about social construction, and that is manipulated by interests with another set of standards. You can be lied to with political theories all day long. People believed slavery was legitimate because the law of the land treated certain humans as property, without human rights. It was an accepted form of economic development, not a legitimate way of treating fellow humans.
#15263712
Tainari88 wrote:You can program it to take into consideration geographical differences in registers. The problem is not that. The problem is being able to capture meanings that are not literal: metaphorical, poetic meanings that have to do with human beings being creative thinkers, able to evolve language to fit changing conditions. That only happens with independent, critically thinking living beings. Human beings are hard-wired to acquire a human language. They are predisposed to that, but the way their environment is set up plays an important role. Piaget's theories on childhood development are basically not valid anymore. Language is incredibly difficult to reproduce in a machine that depends on programming from an external source. Our programming comes from nature; the natural world picks our endemic talents and proclivities. Computers are programmed by other humans, who are limited by their own experiences. That is why you have biases and prejudices showing up in human-programmed computer systems, ones that favor middle-class people, or males, or white males, or English speakers, or this or that group. The programmers are limited. Trying to get a computer to have the same innate, intuitive expansiveness that human beings have? It is hard as hell. Natural science is hard to fake: it follows natural law through the scientific method, and if a result is wrong, you discard it and continue on.

The humanities and social sciences are about social construction, and that is manipulated by interests with another set of standards. You can be lied to with political theories all day long. People believed slavery was legitimate because the law of the land treated certain humans as property, without human rights. It was an accepted form of economic development, not a legitimate way of treating fellow humans.


You can program the bot to take geography into account, but it's not clear it will do a good job, for the reasons you mentioned. I agree; I also don't see AI taking over anytime soon.

It can be a good assistant, however.
#15263726
late wrote:Really?

I'm not trying to start an argument here, but I did love his work.


Linguistics is a very demanding field of study within cultural anthropology, Late. Piaget has been vetted, and his theories on childhood development are not considered valid anymore. The reality is that young children and infants have a natural affinity for acquiring and processing spoken language, but the developmental stages, why certain children vary across them, and why they respond in differing ways to learning are a lot more complex than what was initially theorized.

Who said this? Noam Chomsky, in Understanding Power: The Indispensable Chomsky, edited by Peter R. Mitchell and John Schoeffel and published by The New Press, pages 217 and 218, in response to a question about whether compassion is learned and could be taught; the claim was that if you did not learn it as an infant or young child, you would be incapable of learning it later on. It turns out the experiments were interesting but did not hold water when challenged by the scientific method, Late. The truth is that humans have incredible untapped potential. We have not even scratched the surface of what we are capable of being and becoming, on both the negative side and the incredibly wonderful, positive side. It is basically a crisis of values that has to be dealt with, something that requires huge groups of people working hard and anonymously for change, and a shifting of the conditions of change, first.
