Browsing by author "Turner, Daniel"
Showing 1 - 3 of 3
- Continual learning for object and scene classification
  Publication. Turner, Daniel; Rodrigues, J. M. F.; Cardoso, Pedro J. S.
  Since their existence, computers have been a great asset to mankind, primarily because of their ability to perform specific tasks at speeds humans could never match. However, many tasks that humans consider easy are quite difficult for computers. For instance, a human shown a picture of an automobile and a bicycle can then easily discriminate between future automobiles and bicycles. For a computer to perform such a task with current algorithms, it typically must first be shown a large number of images of the two classes, with varying features and positions, and then spend a great deal of time learning to extract and identify features before it can successfully distinguish between the two. Nevertheless, it does (eventually) perform the task and, once the computational training is complete, can classify images of automobiles and bicycles faster, and sometimes better, than the human. The real advantage of the human appears when another class is added to the mix, e.g., "aeroplane". The human can immediately add aeroplanes to its set of known objects, whereas a computer would typically have to go almost back to the start and re-learn all the classes from scratch. The network must be retrained because of a phenomenon named Catastrophic Forgetting, in which the changes made to the system while acquiring new knowledge cause the loss of previous knowledge. In this dissertation, we explore Continual Learning and propose a way to deal with Catastrophic Forgetting: a framework capable of learning new information without having to start from scratch, and even of "improving" its knowledge of what it already knows.
  With the above in mind, we implemented a Modular Dynamic Neural Network (MDNN) framework, which is primarily made up of modular sub-networks and progressively grows and re-arranges itself as it learns continuously. The network is structured so that its internal components function independently from one another; when new information is learned, only specific sub-networks are altered, so that most of the old information is not forgotten. The network is divided into two main blocks: the feature extraction component, which is based on a ResNet50, and the modular dynamic classification sub-networks. So far we have achieved results below the state of the art on ImageNet and CIFAR10; nevertheless, we demonstrate that the framework meets its initial purpose, which is learning new information without having to start from scratch.
- Modular dynamic neural network: a continual learning architecture
  Publication. Turner, Daniel; Cardoso, Pedro; Rodrigues, João
  Learning to recognize a new object after having learned to recognize other objects may be a simple task for a human, but not for a machine. The current go-to approaches for teaching a machine to recognize a set of objects are based on deep neural networks (DNN), so, intuitively, the solution for teaching a machine new objects on the fly should also be a DNN. The problem is that the trained DNN weights used to classify the initial set of objects are extremely fragile: any change to those weights can severely damage the capacity to perform the initial recognitions, a phenomenon known as catastrophic forgetting (CF). This paper presents a new DNN continual learning (CL) architecture that can deal with CF, the modular dynamic neural network (MDNN). The architecture consists of two main components: (a) a ResNet50-based feature extraction component as the backbone; and (b) a modular dynamic classification component, which consists of multiple sub-networks and progressively builds itself up in a tree-like structure that rearranges itself as it learns over time, in such a way that each sub-network can function independently. The main contribution of the paper is a new architecture strongly based on its modular dynamic training feature. This modular structure allows new classes to be added while altering only specific sub-networks, so that previously known classes are not forgotten. Tests on the CORe50 dataset showed results above the state of the art for CL architectures.
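The modular principle behind the MDNN can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the frozen ResNet50 backbone is replaced by already-extracted feature vectors, and each per-class sub-network is reduced to a running centroid over those features. The point is only the structural property the abstracts describe: learning a new class adds an independent module, and no parameter of an existing module is touched, so earlier classes cannot be forgotten.

```python
import math

class SubNetwork:
    """Toy stand-in for one per-class sub-network: a running centroid."""
    def __init__(self):
        self.centroid = None
        self.count = 0

    def learn(self, feat):
        # Incrementally average the feature vectors seen for this class.
        if self.centroid is None:
            self.centroid = list(feat)
        else:
            self.centroid = [(c * self.count + f) / (self.count + 1)
                             for c, f in zip(self.centroid, feat)]
        self.count += 1

    def score(self, feat):
        # Negative Euclidean distance: higher score means closer.
        return -math.dist(self.centroid, feat)

class ModularClassifier:
    """Routes backbone features to independent per-class modules."""
    def __init__(self):
        self.modules = {}  # class label -> its own SubNetwork

    def learn(self, label, feat):
        # Only the module for `label` is altered; all other modules'
        # state is untouched, so previously learned classes persist.
        self.modules.setdefault(label, SubNetwork()).learn(feat)

    def predict(self, feat):
        return max(self.modules, key=lambda k: self.modules[k].score(feat))

clf = ModularClassifier()
clf.learn("car",  [1.0, 0.0])   # features assumed to come from a backbone
clf.learn("bike", [0.0, 1.0])
print(clf.predict([0.9, 0.1]))  # car
clf.learn("plane", [1.0, 1.0])  # new class: a new module, old ones intact
print(clf.predict([0.1, 0.9]))  # still bike: no forgetting
```

A real MDNN trains neural sub-networks and arranges them in a tree rather than a flat dictionary, but the isolation argument is the same: adding "plane" never rewrites the weights that recognize "car" or "bike".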
- Smart and inclusive bus stops
  Publication. Rodrigues, Joao; Pires Rosa, Manuela; Viegas, Micael; Turner, Daniel; Veiga, Ricardo; Sousa, Nelson
  We present the different concepts that define a smart and inclusive bus stop, resulting in an innovative bus stop that ranges from generating its own energy, making it self-sufficient and sustainable, to integration with the bus network to report the number of users waiting, to an interactive screen that adapts to the needs of any user through an innovative framework developed within the scope of the ACCES4ALL project.
