  • Deep neural networks for protein structure prediction - overview of derivative work

    Papers

    [1] Human mitochondrial protein complexes revealed by large-scale coevolution analysis and deep learning-based structure modeling
    [2] Current protein structure predictors do not produce meaningful folding pathways
    [3] Harnessing protein folding neural networks for peptide-protein docking
    [4] Improved prediction of protein-protein interactions using AlphaFold2
    [5] AlphaFold2: A role for disordered protein prediction?
    [6] AlphaFold2 transmembrane protein structure prediction shines
    [7] Can AlphaFold2 predict protein-peptide complex structures accurately?
    [8] Improved prediction of protein-protein interactions using AlphaFold2
    [9] Possible Implications of AlphaFold2 for Crystallographic Phasing by Molecular Replacement
    [10] Improved Docking of Protein Models by a Combination of Alphafold2 and ClusPro
    [11] Identification of Iron-Sulfur (Fe-S) and Zn-binding Sites Within Proteomes Predicted by DeepMind’s AlphaFold2 Program Dramatically Expands the Metalloproteome

    Glossary

    Intrinsically Disordered Proteins (IDPs) are a large class of proteins without a rigid structure that accomplish their function despite (or thanks to) their dynamic behavior. They can become rigid in complexes with other molecules.

    posted in Structural Biology
  • abzu.ai - something to look at posted in Model Interpretability
  • Precise Counting

    Precise Counting

    I would like to start the topic of precise instance counting in 2D images. At first glance the task seems to be covered by simple classification using CNNs and/or object detection, but in reality the results provided by either rarely give an exact enough answer. Are there smart ways to overcome the limitations of these default go-to methods and reach accuracy above 99.9%? This is the question I will try to answer. For now I will just keep adding literature here before I start writing up some conclusions (a small illustrative sketch of one alternative approach is included at the end of this post).

    Task                   | Architecture          | Best Metric           | References
    wheat heads            | CNN                   | MAE 3.85; RMSE 5.19   | [1]
    corn plants            | YOLOv3                | Accuracy 0.9866       | [2]
    pistachios             | RetinaNet + algorithm | Accuracy 0.9475       | [3]
    fish                   | CNN                   | Accuracy 0.9755       | [4]
    multiple (a review)    | multiple (a review)   | nMAE 0.05 - 0.1       | [5]
    crops, cells, colonies | YOLOv3                | F1 0.947              | [6]
    plants, UAVs           | MixNet                | R² 0.9396 / 0.9875    | [7]
    plant leaves           | Recurrent Attention   | SBD 0.849             | [8]
    todo
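
    As a quick reminder of what the count-level metrics in the table mean, here is a minimal sketch of how MAE, RMSE and (one common definition of) nMAE are computed from per-image counts; the numbers below are made up.

    ```python
    import numpy as np

    # Made-up example counts, just to show how the table's metrics are computed.
    true_counts = np.array([52, 47, 61, 38])   # ground-truth instances per image
    pred_counts = np.array([50, 48, 64, 37])   # predicted instances per image

    mae = np.mean(np.abs(pred_counts - true_counts))            # Mean Absolute Error
    rmse = np.sqrt(np.mean((pred_counts - true_counts) ** 2))   # Root Mean Squared Error
    nmae = mae / np.mean(true_counts)                           # MAE normalized by the mean true count

    print(mae, rmse, nmae)   # ~1.75, ~1.94, ~0.035
    ```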

    Automated Counting of Colony Forming Units Using Deep Transfer Learning From a Model for Congested Scenes Analysis - [9]

    Non-DL:
    Principal Component-Based Image Segmentation - [10]
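
    One family of approaches worth keeping in mind here is density-map regression, the idea behind congested-scene counters such as the model transferred in [9]: instead of classifying or detecting, the network predicts a non-negative per-pixel density map whose sum is the count. Below is a minimal, hypothetical PyTorch sketch of that idea; the DensityCounter name and all layer sizes are made up, and it is not the architecture from any of the cited papers.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical density-map regressor (illustration only, not a cited model):
    # the network outputs a non-negative per-pixel density map; summing the map
    # over all pixels gives the predicted instance count for the image.
    class DensityCounter(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, kernel_size=1),   # single density channel
                nn.ReLU(),                         # keep densities non-negative
            )

        def forward(self, img: torch.Tensor) -> torch.Tensor:
            density = self.backbone(img)           # (N, 1, H, W)
            return density.sum(dim=(1, 2, 3))      # predicted count per image

    model = DensityCounter()
    counts = model(torch.randn(2, 3, 256, 256))    # two predicted counts
    ```

    Training usually regresses the predicted map against Gaussian-blurred dot annotations (MSE on the map) and/or the summed count against the true count.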

    posted in Imaging
  • Quick Overview of Neural Architectures

    Quick Overview of Neural Architectures

    I know it has been done a thousand times before, but a Deep Learning forum couldn't possibly do without such a topic 😉 At the same time, it will serve as an explanation of what the sub-categories mean.

    Convolutional Neural Networks (CNNs)

    As the name suggests, all networks which employ the operation of convolution belong to this category (unless a more specific one is available, e.g. Graph Neural Networks). To a first approximation, one can say that a characteristic trait of CNNs is that sets of convolution kernels are applied indiscriminately to each element of the input and intermediate layers. These sets usually vary between the layers, aiming to detect more abstract regularities in the data as the depth increases. The three most typical parameters of a CNN layer are the kernel size, dilation and stride. The latter two go beyond the approximation above: stride moves the kernel by more than one element at a time, while dilation spreads the kernel out so that it samples non-adjacent elements, meaning some elements may be skipped in individual convolutions. CNNs are best known for, and used profusely in, image classification, object detection and image segmentation. Examples of famous CNNs include ResNet, VGG and AlexNet.
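
    To make the role of kernel size, stride and dilation concrete, here is a minimal, hypothetical PyTorch sketch; TinyCNN and its layer sizes are made up for illustration and it is not one of the architectures named above.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical toy CNN illustrating the three layer parameters discussed above.
    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                # 3x3 kernels applied at every spatial position (stride 1)
                nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),
                nn.ReLU(),
                # stride 2 skips every other position, halving the resolution
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                # dilation 2 spreads the kernel out, enlarging its receptive field
                nn.Conv2d(32, 64, kernel_size=3, dilation=2, padding=2),
                nn.ReLU(),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)        # (N, 64, H/2, W/2)
            x = x.mean(dim=(2, 3))      # global average pooling
            return self.classifier(x)

    model = TinyCNN()
    logits = model(torch.randn(1, 3, 32, 32))   # -> shape (1, 10)
    ```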

    Transformers (NLP)

    This and the next category might be a bit tricky. Transformers are essentially based on the use of attention to selectively weight different portions of information and produce new representations in consecutive layers. They are most renowned for their use in Natural Language Processing (NLP), with the flagship for a number of years being BERT. The input in this setting consists of encoded representations of words, word parts and special tokens - the so-called embeddings. These are transformed across consecutive layers and - based on their aggregation or a selected token - classification/regression-type tasks can be performed. Alternatively, the output representations can be fed to a decoder which acts in the opposite direction and can, for example, output a translation into a different language than the original. In the Transformers category I would like to stick to this type of language model, whereas the Attention-based category is dedicated to other uses of the attention mechanism.
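
    As a rough illustration of the encoder-side pipeline described above (token embeddings in, per-token representations out, classification from a selected token), here is a minimal, hypothetical PyTorch sketch; the TinyTextEncoder name and all sizes are made up and it is far smaller than something like BERT.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical encoder-only classifier in the spirit of BERT-style models:
    # token ids -> embeddings -> stacked self-attention layers -> classification
    # from a selected ([CLS]-like) token. All sizes are made up.
    VOCAB_SIZE, D_MODEL, MAX_LEN, NUM_CLASSES = 30000, 256, 128, 2

    class TinyTextEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.tok_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
            self.pos_emb = nn.Embedding(MAX_LEN, D_MODEL)
            layer = nn.TransformerEncoderLayer(
                d_model=D_MODEL, nhead=8, dim_feedforward=512, batch_first=True
            )
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(D_MODEL, NUM_CLASSES)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            x = self.tok_emb(token_ids) + self.pos_emb(positions)  # embeddings
            x = self.encoder(x)          # new representations per token
            cls_repr = x[:, 0]           # pick the first (special) token
            return self.head(cls_repr)   # classification from that token

    model = TinyTextEncoder()
    ids = torch.randint(0, VOCAB_SIZE, (1, 16))   # one sequence of 16 token ids
    logits = model(ids)                           # -> shape (1, 2)
    ```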

    Attention-based

    • TODO

    Recurrent Neural Networks

    • TODO

    Autoencoders

    • TODO

    Bayesian Networks

    • TODO

    Generative Adversarial Networks

    • TODO

    Graph Neural Networks

    • TODO
    posted in Architectures
  • Welcome to deepnn.science!

    Hello and welcome! I am the creator of this forum and would like to dedicate it to networking and discussion around the topic of Deep Learning. The way I would like to position this community is to put enough emphasis on science to go beyond typical Deep Learning websites (both hobby and professional), while offering a tad more relaxed atmosphere than, for example, ResearchGate. I hope you will take part in this experiment and that together we can create a thriving community beyond the confines of companies and academia. Feel free to share anything that comes to your mind and let's see if we can have a discussion. Write to you soon!

    posted in General