Chapter 7 covers how to identify customer needs for products. Pages 86-87 of the textbook explain how to create a Subsystem Architecture and Design Documents. On page 87, Exercise 2 instructs:
Consider a more complicated household item such as a television. Similarly, develop a list of customer needs and system-level requirements.
a. Develop the system and subsystem architecture.
b. Cascade system-level requirements to each subsystem.
perception of them as objects. Notably, the EU recently passed a vote on whether robots should be accorded rights (Bulman, 2017), which would allow blame to be assigned to them in situations where things go wrong under their control. In noting that robots are increasingly independent of their engineers and programmers (ibid.), the vote echoes a further concern that would undermine the notion of weak AI as purely an object: the line dividing weak AI from strong AI is uncharted territory. As technology grows and changes, intuition may no longer provide us with a reliable guide (Boden, 1988). As the capacity of weak AI to analyse and act on its environment grows, an argument such as Searle's (1999), that the AI is simply processing an algorithm without any formal understanding, appears susceptible to the criticism of what exactly 'understanding' entails (Pinker, 1997). A sufficiently developed weak AI may appear almost indistinguishable from a strong AI, exhibiting traits to which humans may accord a certain notion of 'understanding', such as acting upon what it analyses a person to want rather than what they are literally asking for (Saenz, 2010). In this light, it would seem arbitrary to deny it rights simply because it does not 'think exactly like us' while it exhibits traits of sentience. We attribute a limited understanding to young children and animals, so why not to weak AI? In the second part of my essay, highlighting the latter case, I will draw attention to the potential distinction between animal rights and the rights of weak AI.

Weak AI as an 'animal-level' intelligence

Referring back to my initial description, weak AI lacks what humans would term 'sentience'. Nonetheless, as I have briefly illustrated throughout the first part of my essay, there appear, contrary to our intuitions, to be some compelling reasons for considering weak AI an object of at least some moral consideration.
Singer's (2014) case for animal rights can be invoked here in defence of weak AI. Although it is not actually 'like us', the fact that humans possess a higher or more refined degree of intelligence does not entail that humans may exploit AI with impunity. Just as we cannot give greater moral weight to our own species without committing 'speciesism', we cannot give greater moral weight to an organic life form over a synthetic one without committing a comparable form of discrimination. Thus