Examine fragmentation and breakage in lithic assemblages, their significance, and the broader picture. Consider questions such as:

- What factors influence fragmentation rates, and what are the main causes of breakage and fragmentation (environmental processes, human activity, erosion, trampling, the quality of the raw material (rock type), the skill and technique of the stone tool knapper, etc.)?
- What problems do fragmentation and breakage pose in archaeological assemblages, and how do they affect how one analyses an assemblage?
- What factors must be taken into consideration when dealing with breakage, fragmentation, trampling, and similar processes?
- Why is examining fragmentation and/or breakage important in archaeology?
- What does the type of breakage indicate (for example, longitudinal breakage; please read up on this)?
- What can cause high fragmentation rates (sediments, raw material, technique, trampling, etc.)?
- What is lacking in the literature?
- Looking at the broader picture, what can studies on this provide or contribute, and why is this literature useful and relevant to my topic?
possible that a certain algorithm has more experience with Asian faces than with Caucasian faces. Such unrepresentative training data leads to problems for the population on which the algorithm is eventually used: if few images of one ethnic subgroup are included, the algorithm will perform poorly on that group, because artificial intelligence learns from the examples it was trained on. In conclusion, the performance of face recognition algorithms suffers from a racial or ethnic bias. Both the demographic origin of the algorithm's training data and the demographic structure of the test population strongly influence the accuracy of its results. This bias is particularly unsettling in the context of the vast racial disparities that already exist in arrest rates.

iii. The system still needs a human judge

The last problem discussed in this paper is that the technologies existing today are far from perfect. Companies currently advertise their technologies as "a highly efficient and accurate tool with an identification rate above 95 percent" (Facefirst). In reality, these claims are almost impossible to verify: the facial recognition algorithms used by police are not required to undergo public or independent testing for accuracy or bias before being deployed on everyday citizens. This means that the companies making these claims can easily revise their results and change them if they are not high enough. And even if the claims are true, an identification rate of 95 percent is not enough for society to rely on: applied at the scale of a large population, even a small error rate produces many false matches. When a facial recognition system decides, for example, whether a person has committed a crime by matching the face to images collected from security cameras, the outcome is based purely on that person's facial features. A human being given the same task would base the decision on other factors as well (e.g. voice, height, body language, confidence), which makes the decision more trustworthy. Hence, to keep the chance of falsely identifying a person as low as possible, the system will still need a human judge.

4. Ethics
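The preceding identification-rate argument can be made concrete with a simple base-rate calculation. The following Python sketch uses entirely hypothetical numbers (an assumed population of scanned faces and an assumed number of genuine targets); it illustrates the statistical point only and does not model any real deployed system.

```python
# Hypothetical base-rate illustration: even a 95%-accurate matcher
# produces far more false matches than true matches when the people
# genuinely on a watchlist are a tiny fraction of everyone scanned.

def match_counts(population, true_targets, accuracy):
    """Expected true and false matches for a matcher that is correct
    with probability `accuracy` on any individual face."""
    innocents = population - true_targets
    true_matches = true_targets * accuracy      # targets correctly flagged
    false_matches = innocents * (1 - accuracy)  # innocents wrongly flagged
    return true_matches, false_matches

# Assumed numbers: 1,000,000 faces scanned, 100 genuine targets, 95% accuracy.
tp, fp = match_counts(1_000_000, 100, 0.95)
print(round(tp))  # about 95 correct identifications expected
print(round(fp))  # about 49,995 false identifications expected
# Hundreds of false matches for every correct one: a human judge is
# still needed to review each flagged match before anyone is accused.
```

Under these assumed numbers, false matches outnumber correct ones by roughly five hundred to one, which is the quantitative core of the claim that a 95 percent identification rate cannot stand on its own.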