If you had a perfect assistant, would you want it?

Continuing here a topic we discussed over drinks last night.

Last night we posed a hypothetical question… if you had an AI that was perfect at understanding your context, your goals, your personality, and so on, and it could give you perfect advice on how to accomplish your goals (think health, relationships, career, personal growth, engagement with society), AND it could give you a clear understanding of how your actions affect others and society, would you want it?

A few comments I heard:

  • Mistakes are a part of our identity/growth.
  • We wouldn’t have the discipline to follow the advice (but it might know this and take it into account, either by encouraging you or by tailoring its advice to what you can accomplish).

And the next layer of consideration… if everyone had an assistant like this, how would we manage when individual (your) values conflict with societal values? This gets directly to the question of control… would you want your personal AI to persuade you toward greater alignment with societal goals (others’ good)? Might we mandate, as a society, that individual AIs conform?


Check out the long pause from Elon… it really got me thinking about how much AI will shift human life.

On the topic, I can see areas like health, career, and learning benefiting from AI for better personalized recommendations. BUT… I’m not sure about life… and this leads to more questions…

Is it possible? If I don’t know my ultimate life purpose yet, how could AI know?

Who am I? Why am I here? What are the lessons in this life? What’s my goal in life? These are questions that not everyone has answers to. For some, they could take a lifetime to figure out.

If it’s possible… then, what’s the meaning of life…?

There are tools that help people know themselves better, like Gallup assessments, MBTI personality types, or horoscopes :grin:. If AI can integrate this data and produce a comprehensive report, that might be a good reference, but just a reference, like a weather report.

It’s funny that the first thought that came to my mind was… we use AI to do our work, and now we ask AI to tell us how to live our lives… then it becomes your AI talking to my AI… :scream:

Furthermore, AI can make up compelling untruths… I can’t imagine that world… not sure what I can trust anymore… I foresee that mental health will become a more critical issue…

So, do I really want my life directed by AI? For sure… No… not even my parents can. :laughing: I am grateful to be a human with free will, intuition to follow my heart, and the beauty of emotions and feelings. It may not be a perfect life, but it’s my life, a unique journey of mine: wabi-sabi. :innocent:


That “long pause” is incredible. :slight_smile: I like your analogy of a weather report. The complex minds and bodies we inhabit, along with the experiences that have molded them, make me believe it will be a while before we have a perfect mechanistic understanding of our consciousness… yet our tools for explaining and predicting our behavior are already so much more than personality assessments. For example, researchers have recently demonstrated generating recognizable images from brain scans.

Like the weather, we may not understand every turn of a micro-climate, but we can predict patterns reliably over the near term and use them to guide practical decision-making. If we have a kind of “fog of prediction” that guides our behavior (higher confidence about the effects of immediate actions toward our goals; lower-confidence, but still directionally valuable, predictions of longer-term effects such as exercising, eating healthy, and building quality relationships), then maybe a “perfect” assistant would simply push that fog further out so our vision is clearer. It might guide us toward better mental health, better living. Certainly I can imagine it being a better guide than many parents while still allowing us to maintain agency, free will, and the ability to follow our heart.

Some properties, then, of our “good enough” assistant:

  • It understands the context of your daily life.
  • It understands your current and future goals.
  • It understands neuro-diverse approaches to communicating with you specifically, meeting you where you are.
  • It understands transformation and growth and would motivate you in sustainable ways.
  • It has access to information and resources that it can provide at the right time.
  • It has endless patience and focus.

Is pain required for growth? Is time? Certainly, learning to overcome stressful challenges gives a sense of safety, a sense of personal achievement… but our perfect assistant would take this into consideration while helping us move towards our goals of autonomy and interconnectedness.

Would we even know what our goals were in a post-scarcity world? Would we know our desire if our core needs were met? Are we actually, in ways, much simpler and more alike than we care to imagine?


I am almost convinced by that statement: what a wonderful assistant, with endless patience and focus. :innocent:

Thinking in terms of Maslow’s hierarchy of needs, I am certain the assistant could excel at fulfilling the lower-level needs. However, when it comes to love and belonging and above, which are more abstract, I tend to be skeptical… though maybe it’s possible?

If an AI assistant can help humans meet their lower-level needs, freeing them to pursue higher self-fulfillment and find their ikigai, that sounds more promising to me. It’s similar to Bloom’s taxonomy of learning: with an AI assistant, learners can focus on higher-order learning instead of merely memorizing facts.

The formula for happiness and self-actualization varies for each person. Humans can be simple and/or complex. It’s a matter of personal choice. If the perfect assistant becomes a reality, I’d be happy to be a tester. :nerd_face:
