BLCC Laboratory works on brain-like computing, materials data analysis, machine learning, and other related areas.

Brain-like Computing Center

Our brain laboratory is in the initial stage of construction. At present, we have introduced devices such as an electrode-cap EEG system, fNIRS equipment, and glasses-type eye trackers.

The mind encompasses the whole of human mental activity, including emotion, will, sensation, perception, mental imagery, learning, memory, and thinking. Humanity has long sought to understand itself, above all the working mechanisms of the brain; exploring the brain's secrets has always been a human dream.

  • Using EEG equipment to explore how the brain perceives the external environment, including human attention, learning, memory, and decision-making;
  • Combining EEG equipment with glasses-type eye trackers to explore human physiological responses from multiple angles, for example in fatigue detection;
  • Using fNIRS equipment for functional imaging studies of complex cognitive activities in the brain.

Computing Platform

Object Storage (DS3): An object storage service built on Minio and the S3 protocol. It is a reliable storage service provided by BLCC Labs that can hold large amounts of data; users can store data there and use it directly on our platform. DS3 offers broad platform compatibility, high scalability, low cost, and other advantages.

Cloud Virtual Machine (DC2): A virtualization solution based on KVM and Cloud-init. It is a secure and reliable elastic computing service provided by BLCC Labs on the platform. Users can provision several virtual machines in seconds. It supports students' learning and development within the laboratory and greatly reduces hardware and software procurement costs.

Private Cloud Disk (DNC): A private cloud disk service based on Nextcloud. It is suited to storing and sharing internal data files within a team; files can be viewed, edited, and collaborated on online.

GPU Computing (DGC): A GPU computing service based on Docker and K80 GPUs. It is a computing service provided by BLCC Labs that delivers real-time, high-speed parallel and floating-point computing, and is suited to application scenarios such as deep learning and scientific computing. It effectively relieves the team's computational load and improves the efficiency and competitiveness of computing in scientific research.

Code Hosting (DGit): A code hosting platform based on GitLab. BLCC Labs provides developers with Git-based online code hosting tools, covering code commits, storage, download, cloning, branching, history, comparison, merging, and other features. It enables one-stop management of code and code quality, projects, and project members, greatly enhancing R&D efficiency.

Online Programming (DC9): An online programming platform built on Cloud9. It is a professional integrated online programming environment provided by BLCC Labs. Combined with DGit, it supports online collaborative programming within a team; code can be run directly in a test environment or deployed to DC2 with one click.

Machine Learning

Solid solubility is a key concept describing the composition of alloys in materials science. Its value can be measured directly by experiment, but the testing process is complex and cumbersome. In theory, solid solubility depends on the basic physical properties of the alloy components, but conventional regression methods fit it poorly and lack interpretability. We therefore apply symbolic regression, which fits the relationship between the target value and the given attributes with an explicit formula. The method is based on the random search of genetic algorithms, and we use a Monte Carlo search tree to optimize the algorithm and explore candidate solutions in the solution space. Please refer to Publications for more details.
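To illustrate the idea, the following is a minimal sketch of symbolic regression by random search over small expression trees. It is a toy stand-in for the genetic-algorithm and Monte Carlo tree search used in the actual work; the operator set, tree depth, and data are illustrative assumptions.

```python
import math
import random

# Toy symbolic regression: sample small expression trees over the input
# attributes and keep the one with the lowest mean squared error.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_expr(n_vars, depth=2):
    """Sample a random expression tree over n_vars input variables."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return ('var', random.randrange(n_vars))
        return ('const', random.uniform(-2, 2))
    op = random.choice(list(OPS))
    return (op, random_expr(n_vars, depth - 1), random_expr(n_vars, depth - 1))

def evaluate(expr, x):
    """Evaluate an expression tree on one input tuple x."""
    kind = expr[0]
    if kind == 'var':
        return x[expr[1]]
    if kind == 'const':
        return expr[1]
    return OPS[kind](evaluate(expr[1], x), evaluate(expr[2], x))

def mse(expr, X, y):
    return sum((evaluate(expr, x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def search(X, y, n_iter=3000, seed=0):
    """Random search: keep the best-scoring sampled expression."""
    random.seed(seed)
    best, best_err = None, math.inf
    for _ in range(n_iter):
        cand = random_expr(len(X[0]))
        err = mse(cand, X, y)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Toy target: y = x0 * x1 + x0
X = [(i * 0.1, j * 0.1) for i in range(5) for j in range(5)]
y = [a * b + a for a, b in X]
expr, err = search(X, y)
print(round(err, 4))
```

In the real method, the random sampling step above is replaced by genetic operators (crossover, mutation) guided by a Monte Carlo search tree, which explores the expression space far more efficiently than blind sampling.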

The fuzzy model is composed of multiple linear subsystems that jointly fit one nonlinear system: a fuzzy algorithm partitions the input variables, and fuzzy inference followed by defuzzification yields several equations relating each input to the output. The nonlinear system is thus described by a set of if-then fuzzy rules, each representing one subsystem. The basic form of a fuzzy implication is "If x is M, then y = f(x)", where f(x) is a linear function of x. The key to building a fuzzy model is generating suitable fuzzy rules, which can be derived from data or from expert knowledge. In the solid-solubility application, the main fuzzy criteria are combined with the Hume-Rothery rules to predict solid solubility. Please refer to Publications for more details.
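The rule form "If x is M, then y = f(x)" with a membership-weighted combination can be sketched as follows. The membership functions and linear consequents here are illustrative placeholders, not the rules used for solid-solubility prediction.

```python
import math

def gauss(x, center, width):
    """Gaussian membership function for fuzzy set M."""
    return math.exp(-((x - center) / width) ** 2)

# Two rules "If x is M_i, then y = f_i(x)", each a linear subsystem.
rules = [
    (lambda x: gauss(x, 0.0, 1.0), lambda x: 2.0 * x + 1.0),
    (lambda x: gauss(x, 3.0, 1.0), lambda x: -x + 5.0),
]

def fuzzy_output(x):
    """Defuzzify by the membership-weighted average of rule outputs."""
    weights = [m(x) for m, _ in rules]
    outputs = [f(x) for _, f in rules]
    return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)

# Near x = 0 the first rule dominates, so the output approaches 2x + 1.
print(round(fuzzy_output(0.0), 3))  # 1.0
```

Between the rule centers the two linear subsystems blend smoothly, which is how a handful of linear pieces can approximate a nonlinear relationship.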

For high-temperature superconductor materials, structural and electronic parameters are used to predict performance, and a prediction model for the transition temperature of high-temperature superconductors is constructed with machine-learning methods. In this study we established a transition-temperature prediction model, PCA-PSO-SVR. The model uses Principal Component Analysis (PCA) for feature dimensionality reduction to reduce the interdependence of the high-temperature superconductor data, while Particle Swarm Optimization (PSO) tunes the Support Vector Regression (SVR) algorithm to improve prediction accuracy. Finally, additional high-temperature superconductor data are used to verify the validity of the model. Please refer to Publications for more details.
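The PCA-to-SVR stage of the pipeline can be sketched with scikit-learn as below. This is a simplified illustration on synthetic placeholder data, not real superconductor measurements, and a plain grid search stands in for the PSO tuning step described above.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Synthetic stand-ins for structural/electronic descriptors and T_c.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=120)

# PCA decorrelates and reduces the descriptors before SVR regression.
pipeline = make_pipeline(StandardScaler(),
                         PCA(n_components=5),
                         SVR(kernel='rbf'))

# Hyperparameter tuning; the actual model uses PSO instead of a grid.
search = GridSearchCV(pipeline,
                      {'svr__C': [1, 10, 100],
                       'svr__gamma': ['scale', 0.1]},
                      cv=3)
search.fit(X, y)
print(search.predict(X[:3]).shape)  # (3,)
```

In the full method, each PSO particle encodes an (C, gamma) pair and the swarm converges on the setting with the best cross-validated error, replacing the fixed grid above.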

Based on ABX3 perovskite material data, combined with feature selection and machine-learning methods from feature engineering, we constructed a prediction model for the ABX3 perovskite band gap. In this study, the data set is divided from different perspectives into four sub-data sets; for each sub-data set we perform feature selection, keeping the five features with the highest Pearson correlation coefficients. Finally, a variety of machine-learning methods and deep-learning methods (TensorFlow) are trained on both the selected features and the original data sets to build corresponding ABX3 perovskite band-gap prediction models. The predictions of these models are compared to analyze the influence of feature selection on model accuracy. This approach can effectively explore and discover the intrinsic links within material data. Please refer to Publications for more details.
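The Pearson-based selection step (keeping the top five features) can be sketched as below. The data are synthetic placeholders for the ABX3 descriptor table; only the selection logic is the point.

```python
import numpy as np

def top_k_pearson(X, y, k=5):
    """Return indices of the k features most correlated with y
    by absolute Pearson correlation coefficient."""
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                      for j in range(X.shape[1])])
    return np.argsort(corrs)[::-1][:k]

# Synthetic stand-in: 12 descriptor columns, 200 samples.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 12))
# Make features 0 and 3 strongly related to the toy target.
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=200)

selected = top_k_pearson(X, y)
print(sorted(int(i) for i in selected[:2]))  # [0, 3]
```

Models are then trained once on the selected columns and once on all columns, and comparing their errors quantifies what the feature selection contributed.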



Our team includes many teachers and students.




Contact Us

Address: Room 608, Building of Computer Engineering and Science, Shanghai University, 333 Nanchen Road, Dachang Town, Baoshan District, Shanghai


Tel:   021-66135551