
If we look at the relationship between brain science and brain-inspired algorithm research in this way, we may find many basic principles worth borrowing. A few examples are briefly discussed below. The first is that we can learn from the brain how to better modularize algorithm design. Modular design has long been adopted in computer science: the solution to a problem is divided into several fixed parts (sub-problems), and each computational module (subroutine) is responsible for only one of them. The advantage of this design is that it greatly simplifies algorithm design and makes a system easy to debug, easy to modify, and able to be refined and extended step by step. More importantly, because problems that look different on the surface can often be decomposed into similar sub-problems, modules can be reused, which greatly improves efficiency and allows a highly streamlined system to handle complex and diverse tasks [7]. The advantages of modular design are obvious, but for any given set of concrete problems, how to partition the sub-problems most efficiently is itself a difficult task, and this may be one of the important things we can learn from the brain.

The real brain is a paradigm of modular design: each brain region or sub-region is responsible for one stage or aspect of information processing, and the specific division into modules is the result of optimization over a long course of natural selection, already adapted to handling real-world problems efficiently. The multi-layer, stage-by-stage processing architecture for visual information that deep neural networks borrow is, in a sense, one aspect of the brain's modular design. Moreover, a recent study showed that merely adopting a very coarse division of brain functional modules (a set of visual areas, a memory area, a decision-making area, a set of motor-control areas, and so on) already enables a relatively simple system to handle a variety of different tasks. AlphaGo likewise contains a value network that evaluates Go board positions and a separate policy network that selects moves (we do not know whether this was a deliberately brain-inspired design, but it may in principle be a brain-like division of labor), which also suggests that an appropriate modular design may have been an important factor in its success. These results are encouraging, but our borrowing from the brain's modular design has probably only just begun. Neuroscience is now producing a highly detailed modular parcellation atlas of the human brain, containing hundreds of sub-regions together with the information transmission pathways between each module and the others (Figure 3) [9]. It is foreseeable that this will provide key insights for the design of brain-inspired information processing algorithms; for example, a fine parcellation of the language areas and the clarification of their functions could offer useful guidance for the modular design of language processing algorithms. The second example of brain-inspired algorithm design is that we can learn from the brain how to modulate the state of a network, thereby flexibly regulating information processing so that the system can adapt to different functional requirements.
(1) LD (load instruction): connects a normally open contact to the left bus bar. Every logic line that begins with a normally open contact uses this instruction.
(2) LDI (load inverse instruction): connects a normally closed contact to the left bus bar. Every logic line that begins with a normally closed contact uses this instruction.
(3) LDP (load rising-edge pulse instruction): rising-edge detection for a normally open contact connected to the left bus bar; the contact is closed for only one scan cycle at the rising edge (OFF→ON) of the specified bit device.
(4) LDF (load falling-edge pulse instruction): falling-edge detection for a contact connected to the left bus bar; the contact is closed for only one scan cycle at the falling edge (ON→OFF) of the specified bit device.
(5) OUT (output instruction): the instruction that drives a coil.
Usage notes for the load and output instructions:
1) LD and LDI can be used both for contacts connected to the left bus bar and, together with the ANB and ORB instructions, to perform block logic operations;
2) LDP and LDF keep the contact closed for only one scan cycle when the edge condition of the corresponding device occurs.
3) The target devices of LD, LDI, LDP and LDF are X, Y, M, T, C and S.
4) The OUT instruction may be used several times in succession (equivalent to connecting coils in parallel); for timers and counters, a constant K or a data register must be specified after the OUT instruction.
5) The target devices of the OUT instruction are Y, M, T, C and S; it cannot be used for X.
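To make the load and output instructions concrete, a minimal instruction-list sketch is given below. The device addresses (X000, X001, X002, Y000, T0, M0) and the timer constant K50 are chosen purely for illustration and follow FX-series addressing conventions; the notes to the right of each line are explanatory annotations, not part of the program.

LD   X000       logic line starts with normally open contact X000
OUT  Y000       coil Y000 is driven while X000 is ON
LDI  X001       next logic line starts with normally closed contact X001
OUT  T0 K50     timer T0 is driven; constant K50 gives a 5 s delay on the 100 ms time base
LDP  X002       contact closes for one scan cycle at the rising edge of X002
OUT  M0         auxiliary relay M0 is therefore ON for exactly one scan cycle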
(1) AND (AND instruction): connects a single normally open contact in series, performing a logical AND operation.
(2) ANI (AND inverse instruction): connects a single normally closed contact in series, performing a logical AND with the inverted state (AND-NOT).
(3) ANDP (AND rising-edge pulse instruction): connects a rising-edge detection contact in series.
(4) ANDF (AND falling-edge pulse instruction): connects a falling-edge detection contact in series.
Usage notes for the contact series connection instructions:
1) AND, ANI, ANDP and ANDF each connect a single contact in series; there is no limit on the number of contacts that can be connected in series, and these instructions may be used repeatedly.
2) The target devices of AND, ANI, ANDP and ANDF are X, Y, M, T, C and S.
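A similar sketch for the contact series connection instructions follows; again the device addresses are illustrative only, and the right-hand notes are annotations rather than program text.

LD   X000       logic line starts with normally open contact X000
AND  X001       normally open contact X001 in series (logical AND)
ANI  X002       normally closed contact X002 in series (AND with the inverted state)
ANDP X003       rising-edge contact of X003 in series, effective for one scan cycle
OUT  Y000       Y000 is driven only when the entire series condition is satisfied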