According to the standard cosmological model, 27 % of the Universe consists of mysterious dark matter, 68 % of even more mysterious dark energy, and less than 5 % of baryonic matter composed of known elementary particles. The main purpose of this paper is to show that the proposed ratio of 27 : 5 between the amounts of dark matter and baryonic matter is considerably overestimated. Dark matter, and partly also dark energy, might result from inordinate extrapolations in which reality is identified with its mathematical model. In particular, we should not apply results that were verified on the scale of the Solar System over several hundred years to the whole Universe and to extremely long time intervals without any bound on the modeling error.
Artificial neural networks (ANNs) have been used to construct empirical nonlinear models of process data. Because such networks are not based on physical theory and contain nonlinearities, their predictions are suspect when extrapolating beyond the range of the original training data. Standard networks give no indication of possible errors due to extrapolation. This paper describes a sequential supervised learning scheme for the recently formalized Growing Multi-experts Network (GMN). It is shown that the GMN can generate a Certainty Factor that can serve as an extrapolation detector for the GMN. The on-line GMN identification algorithm is presented and its performance is evaluated. The capability of the GMN to extrapolate is also examined. Four benchmark experiments demonstrate the effectiveness and utility of the GMN as a universal function approximator.
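The abstract does not specify how the Certainty Factor is computed, so the following is only a minimal sketch of the general idea behind a certainty-based extrapolation detector for a network of local experts: a query that activates no expert's receptive field strongly is flagged as lying outside the training data. The mixture of Gaussian-weighted local linear experts, the name `predict_with_certainty`, and the threshold 0.1 are illustrative assumptions, not the published GMN algorithm.

```python
# Sketch (assumed, not the published GMN): local linear experts with Gaussian
# receptive fields; the strongest (unnormalized) activation acts as a certainty
# factor, and a low value signals extrapolation beyond the training region.
import numpy as np

rng = np.random.default_rng(0)

# Training data: a 1-D nonlinear function sampled only on [-2, 2].
x_train = rng.uniform(-2.0, 2.0, size=200)
y_train = np.sin(2.0 * x_train) + 0.05 * rng.standard_normal(200)

# Gaussian receptive-field centres spread over the training range.
centres = np.linspace(-2.0, 2.0, 15)
width = 0.3  # receptive-field width (assumed)

def activations(x):
    """Unnormalized Gaussian activation of each local expert for inputs x."""
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)

# Fit one local linear model (slope, intercept) per expert by weighted least squares.
phi_train = activations(x_train)
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
experts = []
for j in range(len(centres)):
    W = np.diag(phi_train[:, j])
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y_train, rcond=None)
    experts.append(coef)
experts = np.array(experts)  # shape (n_experts, 2)

def predict_with_certainty(x):
    """Blend local experts; return prediction and a certainty factor in (0, 1]."""
    phi = activations(x)
    local_pred = experts[:, 0][None, :] * x[:, None] + experts[:, 1][None, :]
    y_hat = (phi * local_pred).sum(axis=1) / (phi.sum(axis=1) + 1e-12)
    certainty = phi.max(axis=1)  # how strongly any expert covers the query
    return y_hat, certainty

# Queries inside and well outside the training range.
x_query = np.array([0.5, 1.5, 4.0])
y_hat, cf = predict_with_certainty(x_query)
for xq, yq, c in zip(x_query, y_hat, cf):
    flag = "extrapolating" if c < 0.1 else "interpolating"
    print(f"x={xq:4.1f}  prediction={yq:6.3f}  certainty={c:.3f}  ({flag})")
```

Run as-is, the query at x = 4.0 receives a certainty near zero and is flagged as extrapolation, which is the behaviour the paper's Certainty Factor is meant to provide for the GMN.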
Wittgenstein describes the process of mastering a rule (adopting a skill) as the mechanical implanting of a number of specific examples (steps), after which one ''knows how to go on''. Such a two-step concept of learning (e.g. in Cavell) can be understood as the sequence of i) a propaedeutic phase limited in time and ii) the subsequent skill of extrapolating the rule to an unlimited number of cases (Chomsky's account of a rule). The relationship between the ''propaedeutics of examples'' and the mastered skill is, however, more complex. I will refer here to Wittgensteinian ethics (e.g., Winch), which emphasizes the individual's never-ending, repeated work (reflection) on specific examples. I will also point to the empirical evidence (Ingold, in particular) that in processes of learning an essential role is played by memorizing and copying given (specific) models, which requires attention and observation. Competence is then the physical implantation and individual mastering of such a limited technique, rather than primarily an ability to extrapolate and innovate. (Ondřej Beran)