A company hires your team to develop back-propagation neural network(s) for predicting the next-week trend of five stocks (i.e., go up, go down, or remain the same). The company also provides data for every stock over the past 15 years. Each data record consists of 20 attributes (e.g., revenue, index value, earnings per share, capital investment). Answer the following questions:
1) Team member A proposes that you design a single neural network to handle all five stocks, while member B argues that you should design five networks (one per stock). Who do you think is correct, and why?
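For concreteness, here is a minimal sketch of the two designs, assuming placeholder data shapes (roughly 15 years of weekly records) and using scikit-learn's MLPClassifier as a stand-in for a hand-built BP network; the data here is random and purely illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_weeks, n_attrs, n_stocks = 780, 20, 5            # ~15 years of weekly records
X = rng.normal(size=(n_stocks, n_weeks, n_attrs))  # placeholder attribute data
y = rng.integers(0, 3, size=(n_stocks, n_weeks))   # 0=down, 1=same, 2=up

# Member B's design: one independent network per stock.
per_stock_nets = []
for s in range(n_stocks):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(X[s], y[s])
    per_stock_nets.append(net)

# Member A's design: a single network, with the stock identity one-hot
# encoded and appended to the 20 attributes so the net can tell stocks apart.
stock_id = np.repeat(np.eye(n_stocks), n_weeks, axis=0)
X_all = np.hstack([X.reshape(-1, n_attrs), stock_id])
y_all = y.reshape(-1)
single_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
single_net.fit(X_all, y_all)
```

Which design wins depends on whether the five stocks share enough structure for a pooled network to exploit; the one-hot stock-ID input shown here is one way a single network can still learn per-stock behavior.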
2) During training, you notice at some point that the error rate (per training cycle) oscillates: it decreases in round n, increases in round n+1, decreases again in round n+2, and so on. What might cause this phenomenon, and what should you do about it?
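For intuition, here is a minimal sketch (toy XOR task in plain numpy, not the stock data) of the usual culprit: a learning rate so large that each weight update overshoots the error minimum, so the per-epoch error tends to bounce up and down instead of settling; shrinking the learning rate typically restores a steady descent.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lr, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
    errors = []
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # forward pass, hidden layer
        out = sigmoid(h @ W2)                # forward pass, output layer
        err = out - y
        errors.append(float(np.mean(err ** 2)))
        d_out = err * out * (1 - out)        # backprop through output
        d_h = (d_out @ W2.T) * h * (1 - h)   # backprop through hidden layer
        W2 -= lr * h.T @ d_out               # gradient steps
        W1 -= lr * X.T @ d_h
    return errors

# With lr=20 the epoch error typically jumps up and down (overshooting);
# with lr=0.5 it usually decreases smoothly.
print("lr=20 :", [round(e, 3) for e in train(lr=20)[:8]])
print("lr=0.5:", [round(e, 3) for e in train(lr=0.5)[:8]])
```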
3) Suppose you carefully trained the network(s) with all 20 attributes of the data objects for 20,000 cycles (i.e., epochs) and never observed any oscillation: the total error rate kept decreasing until it reached a very small value, so everything looks great. However, when you tested the network(s) on a new testing data set (or sets), the results were very poor (low accuracy). Give at least two possible reasons and explain how to prevent them.
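Below is a minimal sketch (placeholder random data, scikit-learn as a stand-in BP network) of the standard safeguard: hold out a validation set and watch its error during training. Training error that keeps falling while validation error rises is the classic signature of overfitting, so one stops near the point where validation error bottoms out.

```python
import warnings
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore")        # silence per-step ConvergenceWarning

rng = np.random.default_rng(0)
X = rng.normal(size=(780, 20))           # placeholder: ~15 years of weekly records
y = rng.integers(0, 3, size=780)         # 0=down, 1=same, 2=up

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1, warm_start=True,
                    random_state=0)      # max_iter=1 + warm_start: one epoch per fit
best_val, bad_rounds, patience = np.inf, 0, 5
for epoch in range(500):                 # abbreviated; the question's run used 20,000
    net.fit(X_tr, y_tr)
    val_err = 1.0 - net.score(X_val, y_val)
    if val_err < best_val:
        best_val, bad_rounds = val_err, 0
    else:
        bad_rounds += 1
        if bad_rounds >= patience:       # validation error stopped improving
            print(f"early stop at epoch {epoch}, best val error {best_val:.3f}")
            break
```

scikit-learn can also do this internally via MLPClassifier's early_stopping=True and validation_fraction parameters; the explicit loop above just makes the mechanism visible.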
4) Explain why using a maximum number of training epochs, or a maximum acceptable error per epoch, as the termination condition for BP network training would be problematic. What problems can such termination conditions cause?
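Here is a minimal sketch (synthetic error curves, not a real network) of why both termination rules are fragile. The curves mimic a typical run in which training error keeps shrinking while validation error bottoms out and then rises.

```python
import numpy as np

epochs = np.arange(1, 20001)
train_err = 1.0 / np.sqrt(epochs)                     # keeps falling forever
val_err = 1.0 / np.sqrt(epochs) + epochs / 40000.0    # falls, then rises

# Rule 1: stop at a fixed maximum epoch count -> stops blindly at epoch 20000,
# long after validation error started climbing (overfitting goes unnoticed).
print("val error at fixed cap:", round(val_err[-1], 3))

# Rule 2: stop when training error drops below a threshold -> with a target
# of 1e-3 the loop would simply never terminate here (the minimum is ~0.007).
print("smallest training error reached:", round(train_err.min(), 4))

# A validation-based rule stops near the true optimum instead.
best = int(np.argmin(val_err))
print("validation minimum at epoch", epochs[best],
      "with error", round(val_err[best], 3))
```

In practice one usually keeps the epoch cap only as a safety net and stops on the validation signal, as in the sketch after question 3.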