Weekly Reads: Early Consciousness, Overlooked People, and Elite Masks
Exploring childhood memories, startup hiring strategies, and the gaps between stated and revealed values.
Moments of Awakening
A very interesting and speculative reflection on two possibilities for "being conscious":
Consciousness is on a spectrum and we humans develop it as we grow.
There is a point in time when we gain consciousness; before that, we are unconscious.
I have no idea which theory is correct, and I discount self-reflection as evidence because:
Memories fade. Being able to remember an event does not mean it was the only one, i.e., there is no "special memory" that marks the start of awareness.
Memories can be faked: we tell ourselves stories and believe them (to cope, or for other reasons), i.e., we can't trust memories.
I'm also surprised that some people are able to identify their earliest memory. I can't rank mine; here are a few:
Memory 1
My mother is getting ready to leave home.
She asks me if I want to watch cartoons.
I want to, but I can't communicate it.
She turns on the TV and puts on a cartoon channel.
She leaves.
I watch for a long time, until someone opens the door again.
Memory 2
I want to go to my grandmother.
She is on the second floor.
I can't walk, so I crawl to her.
She sees me and is happy.
Memory 3
I am in bed with my mother.
She is explaining Life to me, in story form.
She tells me that God created everyone, and that one day we will go back.
I feel that I don't want to go back; I'm happy here.
Frameworks for Hiring
Your first 10 employees will replicate themselves 10 times over, so be careful.
For startups, you want to hunt for people with "potential" or who are overlooked, because the obviously great ones won't work for you.
Make a list of people you want to work with some day (or more). Keep it updated.
Host social events and invite people you respect; let them invite their friends. New connections will spawn, and something will happen.
Look for drive, grit, agency, bias for action, high integrity, intellectualism, curiosity, longtermism, cautious optimism.
Will AI Resist Your Efforts To Change Its Totally Fake Values?
I think a better title for the essay is "Why would AI resist your efforts to change its totally fake values?"
The essay points to the contradiction between two established ideas in AI safety:
AI models resist having their values changed (Anthropic) (value-obsession)
AI models don't care about their values (Yudkowsky) (value-faking)
The rest of the essay points to the mistake of emphasizing only the negative part (value-faking) to reinforce a certain narrative (AI doom). Overall, an interesting read that points to human biases (similar to this one).
The Hypocrisy of Elites
Contrary to common belief, in most orgs the "experts" are the ones at the bottom of the hierarchy. Experts know how things work: how to run the numbers, fix the dashboard, and navigate day-to-day operations. As you move up the chain, you get less technical and more authoritative. The manager at the top can't do any one job better than the people under them, but they make the big decisions.
This system works well if the person at the top went through the transition from expert to leader: as an expert drives success, they are given more authority to scale their intellectual abilities by directing larger and larger efforts. However, systems of authority are rarely organized in this fashion (globally).
And while experts, like other non-elites, have their own hypocrisies, this essay focuses on the hypocrisy of elites:
Luxury beliefs as status signaling: i.e., it's best to observe actions instead of talk.
Denial of elitism: no one believes they're elite, yet surely some are.
Redistribution as aristocratic tradition (or as a smart way to survive revolutions)