Elementary proof of Funahashi's theorem
Year 2024, Volume: 7, Issue: 2, 30–44, 15.06.2024
Mitsuo Izuki, Takahiro Noi, Yoshihiro Sawano, Hirokazu Tanaka
Abstract
Funahashi established that the class of two-layer feedforward neural networks is dense in the space of all continuous functions on a compact subset of $n$-dimensional Euclidean space. The purpose of this short survey is to reexamine the proof of Theorem 1 in Funahashi \cite{Funahashi}. The Tietze extension theorem, whose proof is given in the appendix, will be used. Although this paper rests on harmonic analysis, real analysis, and Fourier analysis, the intended audience is researchers who do not specialize in these fields of mathematics. Some fundamental facts that are used without proof are collected after the notation is fixed.
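For the reader's convenience, the statement being reexamined can be sketched as follows. This is a paraphrase of Theorem 1 of \cite{Funahashi}; the notation ($\varphi$, $w_i$, $\theta_i$) is chosen here for illustration and may differ from the survey's, while the hypotheses on the activation function follow Funahashi's original formulation.

```latex
% A sketch of Theorem 1 in Funahashi (1989), restated for the reader's
% convenience; notation chosen for illustration.
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}
\newtheorem*{funahashi}{Theorem 1 (Funahashi)}
\begin{document}
\begin{funahashi}
Let $\varphi \colon \mathbb{R} \to \mathbb{R}$ be a nonconstant, bounded,
monotone increasing continuous function, and let $K \subset \mathbb{R}^n$
be compact. Then for every continuous $f \colon K \to \mathbb{R}$ and every
$\varepsilon > 0$ there exist $N \in \mathbb{N}$, coefficients
$c_1, \dots, c_N \in \mathbb{R}$, weights $w_1, \dots, w_N \in \mathbb{R}^n$,
and thresholds $\theta_1, \dots, \theta_N \in \mathbb{R}$ such that
\[
  \max_{x \in K}
  \biggl| f(x) - \sum_{i=1}^{N} c_i \, \varphi(w_i \cdot x + \theta_i) \biggr|
  < \varepsilon .
\]
\end{funahashi}
\end{document}
```

The sum $\sum_{i=1}^{N} c_i \, \varphi(w_i \cdot x + \theta_i)$ is exactly the output of a two-layer feedforward network with $N$ hidden units, so the theorem asserts the density claimed in the abstract.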
References
- D. Beniaguev, I. Segev and M. London: Single cortical neurons as deep artificial neural networks, Neuron, 109 (17) (2021), 2727–2739.e3.
- T. M. Cover: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Transactions on Electronic Computers, 14 (3) (1965), 326–334.
- K. Funahashi: On the approximate realization of continuous mappings by neural networks, Neural Networks, 2 (1989), 183–192.
- A. Gidon, T. A. Zolnik, P. Fidzinski, F. Bolduan, A. Papoutsi, P. Poirazi and M. E. Larkum: Dendritic action potentials and computation in human layer 2/3 cortical neurons, Science, 367 (6473) (2020), 83–87.
- N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: A Global Universality of Two-Layer Neural Networks with ReLU Activations, Journal of Function Spaces, 2021 (2021), Article ID 6637220.
- N. Hatano, M. Ikeda, I. Ishikawa and Y. Sawano: Global universality of the two-layer neural network with the k-rectified linear unit, Journal of Function Spaces, 2024 (2024), Article ID 3262798.
- A. Krizhevsky, I. Sutskever and G. E. Hinton: Imagenet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 25 (2012).
- W. S. McCulloch, W. Pitts: A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biology, 52 (1990), 99–115.
- M. Minsky, S. Papert: Perceptrons: An introduction to computational geometry, MIT Press, Cambridge, MA (1969).
- A. B. Novikoff: On convergence proofs on perceptrons, in: Proceedings of the Symposium on the Mathematical Theory of Automata (1962).
- F. Rosenblatt: The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review, 65 (6) (1958), 386–408.
- W. Rudin: Real and Complex Analysis (Third Edition), McGraw-Hill, New York (1987).
- D. E. Rumelhart, G. E. Hinton and R. J. Williams: Learning representations by back-propagating errors, Nature, 323 (6088) (1986), 533–536.
- T. J. Sejnowski: The unreasonable effectiveness of deep learning in artificial intelligence, Proceedings of the National Academy of Sciences, 117 (48) (2020), 30033–30038.