In the Session Long Project for this course, you are working with several classmates as a virtual team on a project. This project involves the assessment of both group and individual work. Your team’s process and task output will serve as the material for this SLP. Remember to respond to the Discussion that is set up for your team.
Keys to the Assignment
By the end of this module, submit your Team SLP Assignment: a 2- to 3-page team paper, drawing on at least two scholarly sources, that addresses the following:
Identify at least two communication and two task norms that operate in your team. Remember that norms can be “unspoken.”
What problems/limitations did you run across due to a lack of norms governing your team’s behavior?
Did you try anything to overcome the obstacles? How did it work?
What norms do you think your team should have established to make your work go more smoothly?
00) to resulting accuracies (Figure 7). Each algorithm was allowed exactly 2 seconds to run. It is immediately evident that Simulated Annealing scales the most effectively: not only does its curve stay remarkably flat as the bit count increases, but no other algorithm comes close to its accuracy, which remains near 100% throughout the experiment. It is apparent that Simulated Annealing (and Randomized Hill Climbing as well) scales well with the size of a problem's search space. The Genetic Algorithm and MIMIC, by contrast, both show a strong inverse correlation between fitness and the problem's bit count (i.e., the size of its search space). This is almost certainly because both algorithms rely on keeping a fixed number of hypotheses in memory (the population size for Genetic Algorithms and the sample count for MIMIC), and they are therefore unable to scale their scope to larger search spaces. Tweaking these parameters would likely change the results, but doing so would not be productive given the staggering effectiveness of Simulated Annealing in this case.

Figure 7. Optimization algorithm fitness compared to Flip Flop problem bit count (N), using the hyperparameters listed in Table 5 and 2 seconds of iterations.

Next, we ran our efficiency experiment. We fixed the number of iterations at 5,000 and ran each algorithm on a Flip Flop problem with N = 100 (Figure 8). At first glance, it may seem that MIMIC has the upper hand, especially early on. However, given that MIMIC's runtime for this experiment was 12.871 seconds compared to Simulated Annealing's 0.009, the figure is less impressive, especially as the algorithms converge to very similar values in the final iterations. Ultimately, it is clear that Simulated Annealing's effective early exploration of the search space gives it an advantage on the Flip Flop problem, where a few bad bit flips can make it hard to escape a local optimum.

Figure 8. Flip Flop fitness results compared to optimization algorithm iterations.

Knapsack Problem – MIMIC

The Knapsack problem is another NP-complete optimization problem: determining the most valuable way to pack a set of objects, each with a weight and a value, into a 'knapsack' with a specified weight limit. Again, since no polynomial-time solver is known for this problem, we must rely on optimization to approximate a solution. Below, we determine the best algorithm for doing so. We follow the same pair of experiments used previously: one to measure each algorithm's performance as problem complexity increases, and another to measure its fitness over increasing iterations. Our optimization algorithms utilize the hyperparameters predetermined by ABAGAIL.
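To make the setup concrete, the only thing a randomized optimizer needs from the Knapsack problem is a fitness function over bit strings. The following is a minimal Python sketch, independent of ABAGAIL; the item weights, values, weight limit, and the annealing schedule are illustrative assumptions, not the report's actual experimental parameters. It pairs a penalized Knapsack fitness (zero fitness for overweight packings) with a simple simulated annealing loop:

```python
import math
import random

def knapsack_fitness(bits, weights, values, limit):
    """Total value of selected items; 0 if the weight limit is exceeded."""
    w = sum(wt for b, wt in zip(bits, weights) if b)
    v = sum(val for b, val in zip(bits, values) if b)
    return v if w <= limit else 0

def simulated_annealing(weights, values, limit, iters=5000, temp=10.0, decay=0.999):
    """Maximize knapsack_fitness over bit strings via simulated annealing."""
    random.seed(0)
    n = len(weights)
    state = [0] * n                      # start with an empty knapsack
    best = state[:]
    for _ in range(iters):
        # Neighbor: flip one random bit (add or remove one item).
        cand = state[:]
        cand[random.randrange(n)] ^= 1
        delta = (knapsack_fitness(cand, weights, values, limit)
                 - knapsack_fitness(state, weights, values, limit))
        # Always accept improvements; accept worse moves with
        # probability e^(delta/T), which shrinks as the temperature cools.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            state = cand
        if (knapsack_fitness(state, weights, values, limit)
                > knapsack_fitness(best, weights, values, limit)):
            best = state[:]
        temp = max(temp * decay, 1e-6)   # geometric cooling schedule
    return best

# Illustrative instance (weights, values, and limit are made up for the sketch).
weights = [12, 7, 11, 8, 9]
values  = [24, 13, 23, 15, 16]
best = simulated_annealing(weights, values, limit=26)
print(best, knapsack_fitness(best, weights, values, 26))
```

The overweight-means-zero penalty is the simplest feasibility scheme; a softer penalty proportional to the excess weight would give the optimizer more gradient to work with near the limit.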