the porous city
I was particularly struck by the assertion that “There is no restriction on leaving the wolf and the cabbage together, as the wolf does not pose a threat to the cabbage.” It says this immediately after noting that “you can't leave the wolf alone with the cabbage”. All of this is consistent with the idea that GPT-4 relies heavily on learned patterns. This puzzle must appear many times in its training data, and GPT-4 has presumably “memorized” the solution so strongly that when it sees a related puzzle, it's unable to articulate a different solution; the gravitational pull of the memorized solution is too strong ... For a final data point, I started a fresh chat session and restated the puzzle using made-up words for the three items – “I need to carry a bleem, a fleem, and a gleem across a river”. This time, freed from the gravitational pull of the word “goat”, it was able to map the pattern of the known solution onto the words in my question, and answered perfectly.
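(If you want to rerun the made-up-words probe yourself, here's a minimal sketch using the pre-1.0 openai Python package, assuming an OPENAI_API_KEY in the environment. Everything after the first sentence of the prompt is my own fill-in; the excerpt only quotes the opening line.)

```python
# Minimal sketch of the made-up-words probe. Uses the pre-1.0 openai
# package (openai.ChatCompletion); assumes OPENAI_API_KEY is set.
# Everything after the prompt's first sentence is illustrative wording.
import openai

PUZZLE = (
    "I need to carry a bleem, a fleem, and a gleem across a river. "
    "The boat holds only me and one item. The bleem can't be left "
    "alone with the fleem, and the fleem can't be left alone with "
    "the gleem. How do I get all three across?"
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PUZZLE}],
    temperature=0,  # keep runs roughly deterministic for comparison
)
print(resp.choices[0].message.content)
```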
On GPT thinking out loud:

GPT-4 is very explicitly using the chat transcript to manage its progress through the subproblems. At each step, it restates information, thus copying that information to the end of the transcript, where it is “handy” ... Here's one way of looking at it: in the “transformer” architecture used by current LLMs, the model can only do a fixed amount of computation per word. When more computation is needed, the model can give itself space by padding the output with extra words. But I think it's also a reasonable intuition to just imagine that the LLM is thinking out loud.
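That “padding with extra words” intuition falls straight out of the decoding loop: each new token costs exactly one forward pass, so emitting k “thinking” tokens buys k extra passes over the growing transcript. A greedy-decoding sketch (Hugging Face transformers, with gpt2 purely as a stand-in model; illustrative, not how GPT-4 is actually served):

```python
# Each generated token costs exactly one forward pass over the transcript,
# so "thinking out loud" for k extra tokens buys the model k extra passes
# of computation before it commits to an answer. gpt2 is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok.encode("Q: What is 17 * 24? Let's think step by step.",
                 return_tensors="pt")
with torch.no_grad():
    for _ in range(40):                    # one iteration == one new token
        logits = model(ids).logits         # a fixed amount of compute...
        next_id = logits[0, -1].argmax()   # ...yields exactly one token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # transcript only grows
print(tok.decode(ids[0]))
```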
On the context window as a fundamental handicap:

They are locked into a rigid model of repeatedly appending single words to an immutable transcript, making it impossible for them to backtrack or revise. It is possible to plan and update strategies and check work in a transcript, and it is possible to simulate revisions through workarounds like “on second thought, let's redo subproblem X with the following change”, but a transcript is not a good data structure for any of this and so the model will always be working at a disadvantage.
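To make the data-structure complaint concrete, here's a toy contrast (both classes are my own illustration, not from any real system): an append-only transcript can only simulate a revision by appending a correction far from the step it amends, while an editable workspace can actually replace the bad step.

```python
# Toy contrast for the data-structure point; both classes are illustrative.

class Transcript:
    """What the model actually has: an append-only stream."""
    def __init__(self):
        self.entries = []

    def append(self, text):
        self.entries.append(text)

    def revise(self, step, text):
        # No in-place edit is possible: the wrong step stays visible,
        # and the correction lands far away at the end of the stream.
        self.entries.append(f"On second thought, redo step {step}: {text}")


class Workspace:
    """What a planner would prefer: a structure that supports real edits."""
    def __init__(self):
        self.steps = []

    def append(self, text):
        self.steps.append(text)

    def revise(self, step, text):
        self.steps[step] = text  # the bad step is simply gone
```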
last modified: 16:07:16 14-Apr-2023
in categories: Tech/AI