Life in Code: A Personal History of Technology by Ellen Ullman

  • Panic Sweeps the United States in 1999 (a.k.a. Y2K)
    • The situation: Computers stored dates with the years represented by two digits, for example, 98 for 1998 and 99 for 1999…then comes 2000, when machines will read the year as 00 (56 to 57)
      • Jim Fuller’s mindset:
        • A colleague of Ullman’s who spent most of his thirty years as a systems programmer at the Federal Reserve and was then working on the Y2K project (61): “be resourceful, make tools, fix, test, make it work if you’re careful enough” (62 to 63)
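The two-digit bug is easy to demonstrate. Here's a minimal Python sketch (my own illustration, not from the book) of how years stored as "98" and "00" break simple date arithmetic, plus the "windowing" fix many Y2K teams applied:

```python
# Y2K in miniature: years stored as two-digit strings, as many
# legacy systems did. Subtracting them breaks at the year 2000.

def years_between(start_yy, end_yy):
    """Naive two-digit arithmetic, as a pre-Y2K program might do it."""
    return int(end_yy) - int(start_yy)

# Fine throughout the 1900s:
print(years_between("95", "99"))   # 4

# But a loan opened in 1998 and checked in 2000 ("00")
# suddenly looks like it ran for -98 years:
print(years_between("98", "00"))   # -98

# The "windowing" fix: treat values below a pivot (say 30) as 20xx,
# the rest as 19xx. The pivot value here is my own choice.
def expand(yy, pivot=30):
    y = int(yy)
    return 2000 + y if y < pivot else 1900 + y

print(expand("98"), expand("00"))   # 1998 2000
print(expand("00") - expand("98"))  # 2
```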
  • The Internet on Culture and Society
    • “‘The web has evolved to the point we don’t want a shared experience,’ David Ross, the director of San Francisco’s Museum of Modern Art, once told an audience. ‘We no longer need a building to house works of art; we don’t need to get dressed, go downtown, walk from room to room among crowds of other people. Digital images will do […] The tactile sense, shadow and light, the scale of a work that you once shared with tens of others or that requires you to stand close one person at a time […] stand between the individual and the individual’s experience. Unique for the individual and only the individual: now that we have the web […] we can look at anything we want whenever we want. We can create a museum for our own pleasure’” (92 to 93)
    • Ullman’s input on an internet forum for, presumably, programmers:
      • “The post is from a male student who feels his failing means he is not fit for coding. He writes: he cannot keep up with the class, he is going to quit” (256)
      • There are women in the discussion, but it is mainly other men responding; they all encourage him by empathizing and sympathizing (256 – 257):
        • “I’m having trouble too”
        • “Don’t worry the purpose of the course is to learn”
        • “I had to take the course twice before”
        • “Stay with it”
        • “The grades aren’t important”
        • “It’s too soon to say you can’t do programming, this is your first try”
    • Google X Project Loon is a plan to fill the skies with Internet access to those on earth within the balloon’s wireless range (289). Bill Gates says, “When you’re dying of malaria I suppose you’ll look up and see that balloon. I’m not sure how it’ll help you when a kid gets diarrhea and there’s no website to relieve that” (289)
      • The significance of this quote, which Gates gave during an interview with Bloomberg Businessweek, lies in Ullman’s thought, one that probably also goes through the minds of other people who read about the project: What about giving people access to reliable electricity, clean water, and security from wars?
  • Artificial intelligence
    • Question: What differentiates humans from robots?
      • Isaac Asimov’s Evidence (1946) is a short story about a future society that forbids the use of humanoid robots because they have superior powers and it is assumed they will take over the world (129 to 130)
    • Consciousness
      • Robots can’t recognize other robots (157):
        • Humans have an innate ability from birth to perform a social reference, and require said social reference continuously throughout life to form social relationships (i.e., alliances, communication)
      • The ability of machines to relearn, adapt, or change their ways of thinking
        • The brain is not a filing cabinet; rather, it is a network of neural connections that are constantly being strengthened or weakened, formed and broken (176)
          • Example: New York University researchers Karim Nader and Glenn Schafe found that lab rats could not recall formerly consolidated long-term memories when their brains were denied a protein used to form new memories (175 to 176)
  • Self Development
    • Ullman tries to write a program as per her father’s request for a variable rate amortization schedule (237)
      • Obstacles:
        • running the program on a machine she wasn’t familiar with (keep in mind that personal computers were only recently introduced in the 80s and 90s)
        • learning a new programming language
        • scheduling difficulties as she was faced with working on this project but also working with activities from her personal life
          • Results: Ullman’s father said to her, “maybe you should give up. You appear to be struggling” (239). He also waved off Ullman’s remaining questions about the project (239).
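To get a feel for what Ullman was up against, here is a minimal sketch of a variable-rate amortization schedule in Python. The book gives no code, so the function names, the rate schedule, and the convention of recomputing the payment when the rate changes are all my own assumptions:

```python
# A minimal sketch of a variable-rate amortization schedule, in the
# spirit of the program Ullman's father asked for. Everything here
# (names, numbers, conventions) is my own illustration.

def amortize(principal, annual_rate, years):
    """Yield (month, payment, interest, principal_paid, balance).

    annual_rate: a function month -> annual interest rate, so the
    rate can change over the life of the loan (the 'variable' part).
    """
    n = years * 12
    balance = principal
    for month in range(1, n + 1):
        r = annual_rate(month) / 12           # monthly rate, may change
        remaining = n - month + 1
        # Recompute the level payment for the remaining term at the
        # current rate (one common convention for variable-rate loans).
        payment = balance * r / (1 - (1 + r) ** -remaining)
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        yield month, payment, interest, principal_paid, balance

# Example: 5% for the first year, 7% after (hypothetical numbers).
rate = lambda m: 0.05 if m <= 12 else 0.07
schedule = list(amortize(100_000, rate, 30))
print(f"first payment: {schedule[0][1]:.2f}")
print(f"final balance: {schedule[-1][4]:.2f}")
```

The final balance comes out to (floating-point) zero, which is a quick sanity check that the payment formula and the month-by-month bookkeeping agree.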
    • Charles Severance
      • Associate professor at the University of Michigan School of Information
    • Colleen van Lent
      • PhD in computer science and lecturer III in the School of Information
    • The question relevant to the above persons: “How long did you go to school [in order to learn about programming]?”
      • Van Lent answers 10 years (268)
      • Severance answers 20 (268)
    • They collaborate on an online class lecture, and as the lecture continues, Severance begins to talk over her (269) and question her credentials (270)
      • Ullman’s take on the situation: There will always be men like Severance in coding rooms, or anywhere. Don’t be deterred. The lecture was helpful, the ugliness of the scene a gift: it provides a perfect opportunity for another inoculation against the worst of the programming culture. Try the online courses. Power lies in the refusal to be intimidated, in technical fearlessness. Take your time looking at the classes…roll the videos back and forward until it’s all a blur. Get what you need from this man. All prejudice is meant to slap you back and put you in your place. Use your anger to fuel your determination. It is hard to face such prejudice…but here is your chance to learn the difficult feat of looking at prejudice and refusing to be diminished. Meet at the door those who shame and humiliate you for your struggles and failures. Yet, as with my father, that is no reason to give up (271).
  • Additional Readings
    • Daniel Dennett, Consciousness Explained
    • Donald Knuth, The Art of Computer Programming

A.I. Artificial Intelligence directed by Steven Spielberg

In the 22nd century, rising sea levels from global warming have wiped out coastal cities, reducing the world’s population. Mecha, humanoid robots seemingly capable of complex thought but lacking in emotions, have been created.

[An incident occurs with the couple who has adopted David], and Henry convinces Monica to return David to his creators to be destroyed. Monica has a change of heart and spares David from destruction by leaving him in the woods.

[An adventure unfolds from there.]

Plot summary adapted from Wikipedia: https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence#Plot

Nothing I can say will do this movie justice because I personally really liked the movie. But, here are the…

Main takeaways:

  • Ethics and regulations for A.I. are important
    • First, sentient A.I. is uncharted territory
      • If they do not have human-like traits (attachment to non-living objects and living subjects, beliefs or “thoughts”, social relationships with other A.I.s or humans, etc.), then how would that impact the A.I.s themselves? How would that affect the humans who are in contact with said A.I.?
    • Second, relationships between humans and another species capable of reasoning are bound to involve conflicts that require resolution (as life “is just like that sometimes”)
      • The topic of Mecha cruelty sprang to mind intermittently throughout the movie: if Mecha have pain receptors–and, in the case of David, emotions, feelings, and desires–then would abandoning or destroying an A.I. be legally permissible? Is the “Flesh Fair”, a show where people come to witness the destruction of Mecha no longer needed by society, legally permissible?
    • Third, A.I. behavior
      • Pro-social A.I. are what humans really want to develop; the three examples within the movie that don’t support this goal are:
        • A.I. that have gone “rogue”, meaning they are no longer employed by humans
        • Joe, the companion who joins David’s journey midway through the movie, foreshadows the robots taking over the world due to their “superior” intelligence
          • This also parallels a concern that people have in real life: the fear that what was created to help us will, in the end, endanger us
        • And David, if we consider his undying love for Monica to be no longer pro-social but instead detrimental to Monica

Overall:

Regulations and laws for artificial beings like the A.I. shown in the movie will be necessary before such technology reaches the public market.

And, the movie was interesting to me.

Morgan directed by Luke Scott

Lee Weathers is a “risk-management specialist” for genetic-engineering company SynSect. She arrives at a rural site hosting its L-9 project, an artificial being with nanotechnology-infused synthetic DNA named Morgan. [People die].

Taken from Wikipedia

Main takeaway from the film:

  • Sometimes the “immoral” action is the best action
    • In this case, the immoral action is to terminate L-9, despite its ability to show some semblance of human-like personality traits or behaviour, because its harmful effects on its immediate environment and on humanity (if it were released to the public, which it shouldn’t be) outweigh the benefits of keeping L-9
    • I was reminded of the ‘trolley problem’ presented in my philosophy course, Philosophy 120W – Moral and Legal Problems, where:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?” – (Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect” in Virtues and Vices)

  • Don’t write a program that has even the remote possibility–even if the probability is 0.01%–of consciously considering harm to human beings, and then proceed to put said program into a sentient, autonomous machine
    • If that situation does go forward, then the very first time the machine causes harm to anyone–terminate the program immediately

Overall:

  • A good thriller movie to watch if you:
    • want to consider the ethical ramifications of creating a general artificial intelligence
    • want to know what not to do should you decide to go into your basement one day in order to create a walking, conscious weapon

The Founder directed by John Lee Hancock

“Ray Kroc is an unsuccessful traveling salesman selling Prince Castle brand milkshake mixers…Ray meets with the two McDonald brothers, Maurice “Mac” and Richard “Dick” McDonald…Ray persists and eventually convinces the brothers to allow him to lead their franchising efforts on the condition that he agree to a contract which requires all changes to receive the McDonald brothers’ approval in writing”.

Summary taken from Wikipedia

Main Takeaways From Seeing the Film:

  • “Good artists copy, great artists steal” – Pablo Picasso
    • What allowed McDonald’s to become the mega-corporation it is today was Mac and Dick’s ingenuity. Though, if it weren’t for Kroc’s blatant disregard, crookedness, and greed (or, respectively, persistence, cunning, and ambition, depending on how you look at the situation) toward the contract set out between the trio, and his laying claim to the McDonald’s name, McDonald’s would never have become known worldwide.
  • “I have not failed 10,000 times. I have successfully found 10,000 ways that will not work” – Thomas Edison
    • Alternatively: Mac and Dick initially had a business which failed. Eventually their list of failures led to their creation of McDonald’s.
      • Keep in mind that every failure was first expected to turn out a success (as with many, if not all, of our endeavors), and each attempt takes time, effort, and money.
  • “Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent” – Taken from the movie
    • Kroc was rejected by numerous people he tried to sell his product, the milkshake mixer, to, despite his well-planned speech about why they needed it in their lives. Rejection hurts sometimes, especially when you have passion and love for what you do and people don’t see it from the same viewpoint as you do. He kept trying.
    • Eventually, Kroc’s persistence and determination led him to California, where he met Mac and Dick. By meeting them, he was able to sell his entrepreneurial services to them. In the end, McDonald’s grew throughout America through his efforts.

Current Issues in Computing and Philosophy (2/4)

Part 2: Living with the Golem: Robots and Autonomous Agents

  • Can a Robot Intentionally Conduct Mutual Communication with Human Beings? by Kayoko Ishii
    • “Monkey see, monkey do”, a.k.a. social reference (39).
    • If social reference is one of the factors that determine mutual communication between human beings, and if A.I. can be made socially aware through the use of social reference, would socially aware A.I., similar to humans, allow people to be more open to the possibility of humanoid robots? (44)
  • On the Ethical Quandaries of a Practicing Roboticist: A First-Hand Look by Ronald C. Arkin
    • The robot developer’s morality is important in order to engage others in ethical concerns (48)
  • How Just Could a Robot War Be? by Peter M. Asaro
    • If we choose to employ A.I. in war (60), then the question becomes “…the practical ability of autonomous technologies to draw the distinction [between immoral acts and moral acts]…” (60)
    • There still exists a question of morality or amorality associated with using A.I. in war (63)
  • Limits to the Autonomy of Agents by Merel Noorman
    • Two kinds of actions: mimeomorphic and polimorphic
      • Mimeomorphic actions are “monkey see, monkey do” actions–these are actions that can be mimicked without understanding the significance of these behaviours (70) which can be delegated to machines (70)
      • Polimorphic actions are actions associated with morality (71) that can only be acted on by certain people (71)
    • Autonomy can be linked with rationality and morality (human beings) OR autonomy can be linked with measurements and observable properties (computer science, machines) (74)
      • This point relates to robotic laws (74), ethical concerns about robots (74), and how human beings “formulate a normative model for artificial agents to operate under…” (74)

Current Issues in Computing and Philosophy (1/4)

Edited by Adam Briggle, Katinka Waelbers, and Philip Brey

I will just be going over the significant points presented in each of these philosophical articles.

These aren’t works that I would automatically grab my interest if I just came home after school, work, or a day I wanted to cozy up in bed with a page turner. These philosophical articles are what I find to be dry reads. Drier than the Sahara Desert. Very dry.

I thought these articles might be important to know about if I am going to take a serious interest in artificial intelligence so I might as well read them. If these philosophical articles aren’t going to be important in my future then at least I understand what all these people thought about A.I. 

  • Part 1: Me, My Avatar, and I: Exploring Virtual Worlds
    • Metaethics for the Metaverse: The Ethics of Virtual Worlds by Edward H. Spence
      • Spence uses another philosopher’s idea in his article: Alan Gewirth’s moral theory holds that people’s involvement with the virtual world as designers, administrators, players, and avatars should give them rights to freedom and well-being (3)
      • Alan Gewirth’s moral theory states everyone must only engage in actions that are executed by:
        • Free will
        • Non-interference with other people’s free will
        • The contribution to the well-being of others
    • When moral harm occurs, the question is: are the player’s rights harmed, or are the avatar’s rights harmed?
      • One answer: If only the player has rights, then the avatar has none and, as a result, avatars cannot suffer moral harm through a violation of those rights
      • Explanation: the definition of the principle of generic consistency (PGC), related to Alan Gewirth’s moral theory
    • The Virginia Law Review, volume 90, no. 8 (December 2004) states:
      • “the boundaries between game space and the real space are permeable”   
    • Second answer: Universal public morality (UPM), based on the PGC, is applicable to both the real world (RW) and the virtual world (VW) (10-11). Because end-user license agreements and the codes of virtual worlds have to adhere to UPM, both the player and the avatar have rights to freedom and well-being regardless of which world they are in
    • On the Ecological/Representational Structure of Virtual Environments by Omar Rosas
      • The purpose of this paper is to show an alternative view of the virtual environment by looking at two views:  
        • the traditional view, where the experience of the person is subjective and personal
        • the ecological view, where the experience in the virtual environment (VE) is impersonal, as the agent’s awareness of their existence…[counts as] virtual perception and action
      • Rosas’s arguments (14):
        • The ecological view is not an alien concept when compared against the representational view
          • Arguments used to support this claim: the environmental complexity thesis (17-18), the evolution of decoupled representations (18-19), and the theory of event coding (19-20)
          • VEs can be accounted for within this model
    • The Dynamic Representation of Reality and of Our Self Between Real and Virtual Worlds by Lukasz Piwek   
      • The main idea of this article: our view of reality, as well as our perception of ourselves, changes when we play computer games

S1 – E3 CodeNewbie

My main takeaways:

  • Book recommendations: (09:04)
    • Eloquent JavaScript by Marijn Haverbeke
    • Professional JavaScript for Web Developers by Nicholas C. Zakas
    • Ruby on Rails Tutorial by Michael Hartl
  • Don’t just read programming books or watch programming videos, do programming problems (debug, take up projects, go participate in hackathons, etc) (12:47),(13:10)
  • Having a good attitude (18:00)
  • How the O’Garros immediately launch into their job searches (21:46)
  • Working on a business with your significant other (41:38), (43:23)
  • The advice the O’Garros received and their opinions on said advice: Give up (48:56), and stick to one thing (49:17)

LM: Bootcamps, Water Coolers, and Hiring Devs by Carlos Lazo

EP 1 September 16, 2014
Podcast: CodeNewbie

My main takeaways from this episode:

  • You don’t need a Computer Science (CS) degree in order to be successful at programming (03:16), (06:03)
  • Be honest about what you don’t know (30:30), and then find the answer to what you don’t know
  • Find out whether you fit into company culture (28:05), and whether the company culture fits you
  • The best way to prepare for an interview is to do a mock interview (29:27)
  • Being frustrated is OK (23:29), (40:59)
  • Be good at your learning (40:59)
  • Don’t be hard on yourself (40:59)
  • Break the rules (41:13)

Artificial Intelligence, Building Smarter Machines by Stephanie Sammartino McPherson

 

Alan Turing – developed the Turing test, which consists of a person asking questions and receiving answers from both a human being and a computer. If the questioner can’t determine which answers came from the human and which from the computer, then we can assume that the machine is intelligent (9)

  • Examples of artificial intelligence: 
    • Leonardo da Vinci’s mechanical knight in 1495 (40)
    • Elektro and Sparko, from 1939 and 1940 (41)
    • A smart machine called Watson was created by a team of 25 IBM computer engineers
      • Watson challenged Jeopardy! champions Ken Jennings–famous for a seventy-four-game winning streak on the show–and Brad Rutter, and won the million-dollar purse (6)
    • Deep Blue, created by IBM, competed against Russian world chess champion Garry Kasparov (7). Kasparov won the first game and Deep Blue took the second (7). Games three, four, and five ended in draws, and in the final game Deep Blue won (7)
    • An android modeled on Philip K. Dick, created by David Hanson (11), was able to carry on conversations and gave answers and observations like the real author might have given when he was alive (11)
      • For example, a reporter from the TV series Nova asked the android whether it was really able to think (11). The android replied, “the best way I can respond to that is to say that everything humans, animals, and robots do is programmed to a degree. As technology improves, it is anticipated that I will be able to integrate new words that I hear online in real time. I may not get everything right, say the wrong thing, and sometimes may not know what to say, but every day I make progress. Pretty remarkable, huh?” (12)
  • Neural Networks  
    • a system called the Perceptron receives visual data and identifies it through its artificial nerve cells (34)
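A perceptron is simple enough to fit in a few lines. Here is a minimal Python sketch (my own illustration, not from the book) of the classic error-correction learning rule, trained on the AND function rather than visual data:

```python
# A minimal perceptron: weighted inputs, a threshold, and the
# classic error-correction learning rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # 0 when the guess is right
            w[0] += lr * err * x1              # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", out)   # matches the AND truth table
```

The same rule works for any linearly separable problem; famously, it fails on XOR, which is part of why multi-layer networks were needed.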
  • Machine Code of the Brain  
    • Geoffrey Hinton used a mathematical approach derived from Ludwig Boltzmann’s work–the Boltzmann machine–and taught it to self-learn and use a language system known as NETtalk (35-36)
  • Deep learning  
    • For example, Google Brain is able to interpret handwritten data–a lot of data. It teaches itself to guess what a given set of data means, sorts it appropriately, and finally begins to classify the data (36)
  • In the Workplace  
    • robots are good for repetitive work and working around the clock (44)
    • human workers can design, program, and repair said robots (45)
      • robots can’t match human intelligence or replace human decision-making (45)
    • the industries with the fastest growth are health care and manufacturing (48)
  • Singularity  
    • the term singularity was coined by the science fiction writer Vernor Vinge in 1993 (75)  
    • Ray Kurzweil, a futurist, based his Law of Accelerating Returns on a variant of Moore’s Law and predicted that:
      • computers will pass the Turing test in 2029 (75)
      • a new reality known as the singularity will happen in 2045 (75)
      • machines will exceed human intelligence (75), as they will be able to:
        • Reprogram themselves (75)
        • download information (75)
        • and continue improving (75)
  • Ethics  
    • Isaac Asimov’s Three Laws of Robotics: 
      1. A robot may not injure a human (84)
      2. A robot must obey human orders unless they harm people (84)
      3. A robot must protect itself unless doing so breaks rules one and two (84)
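What makes the Three Laws interesting is their strict priority ordering: each law yields to the ones above it. A toy Python sketch (my own illustration, nothing from the book; the action flags are hypothetical) of how that ordering might be encoded:

```python
# Toy encoding of Asimov's Three Laws as a strict priority check.
# An action is described by hypothetical boolean flags; the laws are
# tested in order, so a higher law always overrides a lower one.

def action_permitted(harms_human, disobeys_order, endangers_self,
                     order_would_harm_human=False):
    # First Law: a robot may not injure a human.
    if harms_human:
        return False
    # Second Law: obey orders, unless obeying would break the First Law.
    if disobeys_order and not order_would_harm_human:
        return False
    # Third Law: protect itself, unless that breaks Laws One or Two.
    if endangers_self:
        return False
    return True

print(action_permitted(False, False, False))   # True: harmless, obedient, safe
print(action_permitted(True, False, False))    # False: First Law violated
print(action_permitted(False, True, False,
                       order_would_harm_human=True))  # True: First overrides Second
```

The third call shows the ordering at work: refusing an order is normally forbidden, but becomes permitted when obeying it would harm a human.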
  • Comments on artificial intelligence: 
    • Stephen Hawking (British physicist) – feared that “the age of super smart machines can threaten human existence on earth” (6)
    • Raymond Kurzweil (director of engineering at Google) – predicted that AI will improve life for all humanity and extend the human lifespan indefinitely (6)
    • James Barrat – “…[AI will become] a demon” (79)
  • Additional readings that seem interesting
    • Runaround by Isaac Asimov  
    • R.U.R. (Rossum’s Universal Robots) by Karel Čapek
