AI research atlas / v2
Learn AI papers in the right order.
Start with landmark ideas, move through foundations, then branch into LLMs, GenAI, agents, systems, and safety along a reading path that keeps the field from feeling random.
Build the mental timeline before going deep.
Move from foundations to modern systems.
Learning path
Where to start, and what to read next
Orientation / 1-2 weeks
Start Here
Read the papers everyone keeps referencing so the rest of the map has anchors.
Foundations / 2-4 weeks
Classical ML
Learn the statistical and probabilistic ideas that still sit under modern models.
Foundations / 1-2 weeks
Optimization
Understand the training mechanics behind gradient-based learning.
Builder / 3-5 weeks
Deep Learning Core
Move through representation learning, CNNs, residual networks, and scaling patterns.
Builder / 3-6 weeks
Sequence Models and LLMs
Study attention, transformers, language modeling, instruction tuning, and evaluation.
Specialist / 3-6 weeks
Generative AI
Compare GANs, diffusion, autoregressive generation, and modern GenAI workflows.
Specialist / 2-4 weeks
Multimodal and Retrieval
Connect language with images, retrieval, embeddings, and real-world knowledge access.
Specialist / 3-5 weeks
RL and Agents
Learn decision making, feedback, policy learning, and agent-style systems.
Practitioner / 2-4 weeks
Systems and Scaling
Understand the infrastructure and engineering papers behind large-scale training.
Practitioner / 2-4 weeks
Safety and Interpretability
Study robustness, alignment, transparency, and how to reason about model behavior.
Learning Paradigms
Trust and Deployment
Research library
Computer Vision
Showing papers for this learning path. Open any paper card to read the full paper and related resources.
Deep Residual Learning for Image Recognition
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
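A minimal PyTorch sketch of the residual idea this abstract describes: the block learns a residual function F(x) and adds the unchanged input back, so extra depth never has to re-learn the identity mapping. Channel counts and layer choices here are illustrative, not the paper's exact ImageNet configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions learn the residual F(x); the input x is added back unchanged.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        return self.relu(x + residual)  # identity shortcut: output = F(x) + x

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```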
ImageNet classification with deep convolutional neural networks
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
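The dropout regularization the abstract credits for reducing overfitting in the fully connected layers is easy to sketch; the classifier head below loosely mirrors the 4096-unit fully connected stack but its sizes are illustrative, not the exact 60M-parameter AlexNet configuration.

```python
import torch
import torch.nn as nn

classifier_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),          # each unit is dropped with probability 0.5 at train time
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),      # 1000-way classification; softmax is applied by the loss
)

features = torch.randn(8, 256, 6, 6)     # stand-in for convolutional feature maps
logits = classifier_head(features)
print(logits.shape)                      # torch.Size([8, 1000])
```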
Very Deep Convolutional Networks for Large-Scale Image Recognition
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
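A small sketch of the VGG design pattern the abstract describes: stacks of 3x3 convolutions followed by 2x2 max-pooling, with the channel count doubling as spatial resolution halves. The two-stage stem below is illustrative; the full 16/19-layer configurations repeat this pattern with more blocks.

```python
import torch
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halve the spatial resolution
    return nn.Sequential(*layers)

stem = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2))
x = torch.randn(1, 3, 224, 224)
print(stem(x).shape)   # torch.Size([1, 128, 56, 56])
```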
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
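One concrete ingredient of the RPN is the dense anchor grid: at every position of the shared feature map, a fixed set of reference boxes of several scales and aspect ratios is laid over the image, and the network scores each one. The NumPy sketch below generates such a grid; the stride, scales, and ratios are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride=16, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride   # anchor centre in image coordinates
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)      # box of area ~s^2 with aspect ratio r = w/h
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

anchors = make_anchors(feat_h=38, feat_w=50)
print(anchors.shape)   # (17100, 4): 38 * 50 positions * 9 anchors, each as (x1, y1, x2, y2)
```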
Mask R-CNN
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.
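A schematic sketch of the "parallel branch" idea from the abstract: given pooled region-of-interest features (e.g. from RoIAlign), a box branch predicts class scores and box offsets while a separate mask branch predicts a small per-class segmentation mask. The layer sizes below are illustrative stand-ins, not the paper's exact heads.

```python
import torch
import torch.nn as nn

num_classes, roi_channels = 80, 256

box_branch = nn.Sequential(
    nn.Flatten(),
    nn.Linear(roi_channels * 7 * 7, 1024), nn.ReLU(),
    nn.Linear(1024, num_classes + num_classes * 4),    # class logits + per-class box deltas
)
mask_branch = nn.Sequential(
    nn.Conv2d(roi_channels, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2), nn.ReLU(),
    nn.Conv2d(256, num_classes, kernel_size=1),        # one 28x28 mask logit map per class
)

rois = torch.randn(32, roi_channels, 14, 14)           # pooled features for 32 proposals
box_input = nn.functional.adaptive_avg_pool2d(rois, 7) # box branch uses a coarser 7x7 pooling
print(box_branch(box_input).shape)                     # torch.Size([32, 480])
print(mask_branch(rois).shape)                         # torch.Size([32, 80, 28, 28])
```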
Random sample consensus
A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
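The core RANSAC loop is short enough to sketch directly: repeatedly fit a model to a minimal random sample, count inliers, and keep the consensus-maximising fit. The NumPy example below fits a 2D line to data with gross outliers; the iteration count and inlier threshold are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(points, n_iters=200, threshold=0.5):
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]  # minimal sample: 2 points
        direction = p2 - p1
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        normal = np.array([-direction[1], direction[0]]) / norm   # unit normal to the candidate line
        distances = np.abs((points - p1) @ normal)                # point-to-line distances
        inliers = distances < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = points[best_inliers].T
    slope, intercept = np.polyfit(x, y, deg=1)                    # refit on the consensus set
    return slope, intercept, best_inliers

# 100 points near y = 2x + 1, with the first 20 turned into gross outliers
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.2, 100)
y[:20] += rng.uniform(-30, 30, 20)
slope, intercept, inliers = ransac_line(np.column_stack([x, y]))
print(round(slope, 2), round(intercept, 2), int(inliers.sum()))
```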
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
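The patch-embedding step behind "an image is worth 16x16 words" can be sketched in a few lines of PyTorch: cut the image into fixed-size patches and project each one to a token a standard Transformer encoder can consume. The 16-pixel patch and 768-dimensional token width below follow the common ViT-Base setting but are only illustrative here.

```python
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)          # (batch, channels, height, width)
patch, dim = 16, 768

# A strided convolution with kernel = stride = patch size is equivalent to splitting the image
# into non-overlapping 16x16 patches and applying one shared linear projection to each patch.
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
tokens = to_tokens(image).flatten(2).transpose(1, 2)    # (1, 196, 768): 14 * 14 patch tokens

cls_token = torch.zeros(1, 1, dim)                      # stand-in for the learnable [class] token
tokens = torch.cat([cls_token, tokens], dim=1)          # (1, 197, 768), ready for the encoder
print(tokens.shape)
```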
High-Resolution Image Synthesis with Latent Diffusion Models
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve new state of the art scores for image inpainting and class-conditional image synthesis and highly competitive performance on various tasks, including unconditional image generation, text-to-image synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.
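A heavily simplified sketch of the training step the abstract describes: encode the image into the latent space of a pretrained autoencoder, add noise at a random level, and train a denoiser to predict that noise. The encoder and denoiser below are tiny stand-ins, not the paper's autoencoder and conditional U-Net, and the noising uses a generic DDPM-style schedule rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)       # stand-in: 3x256x256 image -> 4x32x32 latent
denoiser = nn.Conv2d(4, 4, kernel_size=3, padding=1)     # stand-in for the conditional U-Net

images = torch.randn(2, 3, 256, 256)
with torch.no_grad():
    z = encoder(images)                                  # work in latent space, not pixel space

noise = torch.randn_like(z)
alpha_bar = torch.rand(2, 1, 1, 1)                       # stand-in for the cumulative noise schedule
z_noisy = alpha_bar.sqrt() * z + (1 - alpha_bar).sqrt() * noise

loss = nn.functional.mse_loss(denoiser(z_noisy), noise)  # the denoiser learns to predict the noise
loss.backward()
print(float(loss))
```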
A survey on Image Data Augmentation for Deep Learning
Deep convolutional neural networks have performed remarkably well on many Computer Vision tasks. However, these networks are heavily reliant on big data to avoid overfitting. Overfitting refers to the phenomenon when a network learns a function with very high variance such as to perfectly model the training data. Unfortunately, many application domains do not have access to big data, such as medical image analysis. This survey focuses on Data Augmentation, a data-space solution to the problem of limited data. Data Augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better Deep Learning models can be built using them. The image augmentation algorithms discussed in this survey include geometric transformations, color space augmentations, kernel filters, mixing images, random erasing, feature space augmentation, adversarial training, generative adversarial networks, neural style transfer, and meta-learning. The application of augmentation methods based on GANs is heavily covered in this survey. In addition to augmentation techniques, this paper will briefly discuss other characteristics of Data Augmentation such as test-time augmentation, resolution impact, final dataset size, and curriculum learning. This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation. Readers will understand how Data Augmentation can improve the performance of their models and expand limited datasets to take advantage of the capabilities of big data.
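Several of the techniques the survey catalogues (geometric transformations, color-space shifts, random erasing) are one-liners in torchvision; a minimal on-the-fly pipeline might look like the sketch below, with illustrative probabilities and ranges rather than recommended settings.

```python
import torch
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # geometric: random crop + rescale
    transforms.RandomHorizontalFlip(p=0.5),                # geometric: mirror flip
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),  # color-space shift
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),                      # random erasing, applied on the tensor
])

img = Image.new("RGB", (256, 256), color=(128, 100, 90))   # placeholder image
# Each call produces a different randomized view of the same underlying image.
batch = torch.stack([augment(img) for _ in range(4)])
print(batch.shape)   # torch.Size([4, 3, 224, 224])
```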
Segment Anything
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive – often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at segment-anything.com to foster research into foundation models for computer vision. We recommend reading the full paper at: arxiv.org/abs/2304.02643.
Machine Learning: Algorithms, Real-World Applications and Research Directions
In the current age of the Fourth Industrial Revolution (4IR or Industry 4.0), the digital world has a wealth of data, such as Internet of Things (IoT) data, cybersecurity data, mobile data, business data, social media data, health data, etc. To intelligently analyze these data and develop the corresponding smart and automated applications, the knowledge of artificial intelligence (AI), particularly machine learning (ML), is the key. Various types of machine learning algorithms such as supervised, unsupervised, semi-supervised, and reinforcement learning exist in the area. Besides, deep learning, which is part of a broader family of machine learning methods, can intelligently analyze data on a large scale. In this paper, we present a comprehensive view of these machine learning algorithms that can be applied to enhance the intelligence and the capabilities of an application. Thus, this study's key contribution is explaining the principles of different machine learning techniques and their applicability in various real-world application domains, such as cybersecurity systems, smart cities, healthcare, e-commerce, agriculture, and many more. We also highlight the challenges and potential research directions based on our study. Overall, this paper aims to serve as a reference point for both academia and industry professionals as well as for decision-makers in various real-world situations and application areas, particularly from the technical point of view.
Artificial Intelligence, Machine Learning and Deep Learning in Advanced Robotics, A Review
No abstract available yet.
RCSB Protein Data Bank (RCSB.org): delivery of experimentally-determined PDB structures alongside one million computed structure models of proteins from artificial intelligence/machine learning
The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), founding member of the Worldwide Protein Data Bank (wwPDB), is the US data center for the open-access PDB archive. As wwPDB-designated Archive Keeper, RCSB PDB is also responsible for PDB data security. Annually, RCSB PDB serves >10 000 depositors of three-dimensional (3D) biostructures working on all permanently inhabited continents. RCSB PDB delivers data from its research-focused RCSB.org web portal to many millions of PDB data consumers based in virtually every United Nations-recognized country, territory, etc. This Database Issue contribution describes upgrades to the research-focused RCSB.org web portal that created a one-stop-shop for open access to ∼200 000 experimentally-determined PDB structures of biological macromolecules alongside >1 000 000 incorporated Computed Structure Models (CSMs) predicted using artificial intelligence/machine learning methods. RCSB.org is a ‘living data resource.’ Every PDB structure and CSM is integrated weekly with related functional annotations from external biodata resources, providing up-to-date information for the entire corpus of 3D biostructure data freely available from RCSB.org with no usage limitations. Within RCSB.org, PDB structures and the CSMs are clearly identified as to their provenance and reliability. Both are fully searchable, and can be analyzed and visualized using the full complement of RCSB.org web portal capabilities.
Artificial Intelligence, Machine Learning, Deep Learning, and Cognitive Computing: What Do These Terms Mean and How Will They Impact Health Care?
This article was presented at the 2017 annual meeting of the American Association of Hip and Knee Surgeons to introduce the members gathered as the audience to the concepts behind artificial intelligence (AI) and the applications that AI can have in the world of health care today. We discuss the origin of AI, progress to machine learning, and then discuss how the limits of machine learning lead data scientists to develop artificial neural networks and deep learning algorithms through biomimicry. We will place all these technologies in the context of practical clinical examples and show how AI can act as a tool to support and amplify human cognitive functions for physicians delivering care to increasingly complex patients. The aim of this article is to provide the reader with a basic understanding of the fundamentals of AI. Its purpose is to demystify this technology for practicing surgeons so they can better understand how and where to apply it.
Building Trust in Artificial Intelligence, Machine Learning, and Robotics
No abstract available yet.
Artificial intelligence, machine learning and health systems
Artificial Intelligence and machine learning have the potential to be the catalyst for transformation of health systems to improve efficiency and effectiveness, create headroom for universal health coverage and improve outcomes.
Ethical and Bias Considerations in Artificial Intelligence (AI)/Machine Learning.
As artificial intelligence (AI) gains prominence in pathology and medicine, the ethical implications and potential biases within such integrated AI models will require careful scrutiny. Ethics and bias are important considerations in our practice settings, especially as an increasing number of machine learning (ML) systems are integrated within our various medical domains. Such machine learning based systems have demonstrated remarkable capabilities in specified tasks such as, but not limited to, image recognition, natural language processing, and predictive analytics. However, the potential bias that may exist within such AI-ML models can also inadvertently lead to unfair and potentially detrimental outcomes. The source of bias within such machine learning models can be due to numerous factors but can typically be put in three main buckets (data bias, development bias and interaction bias). These could be due to the training data, algorithmic bias, feature engineering and selection issues, clinical and institutional bias (i.e. practice variability), reporting bias, and temporal bias (i.e. changes in technology, clinical practice or disease patterns). Therefore, despite the potential of these AI-ML applications, their deployment in our day-to-day practice also raises noteworthy ethical concerns. To address ethics and bias in medicine, a comprehensive evaluation process is required which encompasses all aspects of such systems, from model development through clinical deployment. Addressing these biases is crucial to ensure that AI-ML systems remain fair, transparent, and beneficial to all. This review will discuss the relevant ethical and bias considerations in AI-ML specifically within the pathology and medical domain.
Artificial intelligence, machine learning and deep learning
No abstract available yet.
Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda
The exponential advancements in artificial intelligence (AI), machine learning, robotics, and automation are rapidly transforming industries and societies across the world. The way we work, the way we live, and the way we interact with others are expected to be transformed at a speed and scale beyond anything we have observed in human history. This new industrial revolution is expected, on one hand, to enhance and improve our lives and societies. On the other hand, it has the potential to cause major upheavals in our way of life and our societal norms. The window of opportunity to understand the impact of these technologies and to preempt their negative effects is closing rapidly. Humanity needs to be proactive, rather than reactive, in managing this new industrial revolution. This article looks at the promises, challenges, and future research directions of these transformative technologies. Not only are the technological aspects investigated, but behavioral, societal, policy, and governance issues are reviewed as well. This research contributes to the ongoing discussions and debates about AI, automation, machine learning, and robotics. It is hoped that this article will heighten awareness of the importance of understanding these disruptive technologies as a basis for formulating policies and regulations that can maximize the benefits of these advancements for humanity and, at the same time, curtail potential dangers and negative impacts.
A Review of Further Directions for Artificial Intelligence, Machine Learning, and Deep Learning in Smart Logistics
Industry 4.0 concepts and technologies ensure the ongoing development of micro- and macro-economic entities by focusing on the principles of interconnectivity, digitalization, and automation. In this context, artificial intelligence is seen as one of the major enablers for Smart Logistics and Smart Production initiatives. This paper systematically analyzes the scientific literature on artificial intelligence, machine learning, and deep learning in the context of Smart Logistics management in industrial enterprises. Furthermore, based on the results of the systematic literature review, the authors present a conceptual framework, which provides fruitful implications based on recent research findings and insights to be used for directing and starting future research initiatives in the field of artificial intelligence (AI), machine learning (ML), and deep learning (DL) in Smart Logistics.
Has the Future Started? The Current Growth of Artificial Intelligence, Machine Learning, and Deep Learning
In the modern era, many terms related to artificial intelligence, machine learning, and deep learning are widely used in domains such as business, healthcare, industries, and military. In these fields, the accurate prediction and analysis of data are crucial, regardless of how large the data are. However, using big data is confusing due to the rapid growth and massive development in public life, which requires a tremendous human effort in order to deal with such type of data and extract worthy information from it. Thus, the role of artificial intelligence begins in analyzing big data based on scientific techniques, especially in machine learning, whereby it can identify patterns of decision-making and reduce human intervention. In this regard, the significant role of artificial intelligence, machine learning and deep learning is growing rapidly. In this article, the authors decide to highlight these sciences by discussing how to develop and apply them in many decision-making domains. In addition, the influence of artificial intelligence in healthcare and the gains this science provides in the face of the COVID-19 pandemic are highlighted. This article concludes that these sciences have a significant impact, especially in healthcare, as well as the ability to grow and improve their methodology in decision-making. Additionally, artificial intelligence is a vital science, especially in the face of COVID-19.
Artificial Intelligence, Machine Learning, and Deep Learning in Structural Engineering: A Scientometrics Review of Trends and Best Practices
No abstract available yet.
The need for a system view to regulate artificial intelligence/machine learning-based software as medical device
Artificial intelligence (AI) and Machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently proposed a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA who are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.
Promising Artificial Intelligence‐Machine Learning‐Deep Learning Algorithms in Ophthalmology
The lifestyle of modern society has changed significantly with the emergence of artificial intelligence (AI), machine learning (ML), and deep learning (DL) technologies in recent years. Artificial intelligence is a multidimensional technology with various components such as advanced algorithms, ML and DL. Together, AI, ML, and DL are expected to provide automated devices to ophthalmologists for early diagnosis and timely treatment of ocular disorders in the near future. In fact, AI, ML, and DL have been used in the ophthalmic setting to validate the diagnosis of diseases, read images, perform corneal topographic mapping and intraocular lens calculations. Diabetic retinopathy (DR), age‐related macular degeneration (AMD), and glaucoma are the 3 most common causes of irreversible blindness on a global scale. Ophthalmic imaging provides a way to diagnose and objectively detect the progression of a number of pathologies including DR, AMD, glaucoma, and other ophthalmic disorders. There are 2 methods of imaging used as diagnostic methods in ophthalmic practice: fundus digital photography and optical coherence tomography (OCT). Of note, OCT has become the most widely used imaging modality in ophthalmology settings in the developed world. Changes in population demographics and lifestyle, extension of average lifespan, and the changing pattern of chronic diseases such as obesity, diabetes, DR, AMD, and glaucoma create a rising demand for such images. Furthermore, the limitation of availability of retina specialists and trained human graders is a major problem in many countries. Consequently, given the current population growth trends, it is inevitable that analyzing such images is time‐consuming, costly, and prone to human error. Therefore, the detection and treatment of DR, AMD, glaucoma, and other ophthalmic disorders through unmanned automated application systems in the near future will be inevitable. We provide an overview of the potential impact of the current AI, ML, and DL methods and their applications on the early detection and treatment of DR, AMD, glaucoma, and other ophthalmic diseases.
The state-of-the-art on Intellectual Property Analytics (IPA): A literature review on artificial intelligence, machine learning and deep learning methods for analysing intellectual property (IP) data
Big data is increasingly available in all areas of manufacturing and operations, which presents an opportunity for better decision making and discovery of the next generation of innovative technologies. Recently, there have been substantial developments in the field of patent analytics, which describes the science of analysing large amounts of patent information to discover trends. We define Intellectual Property Analytics (IPA) as the data science of analysing large amounts of IP information, to discover relationships, trends and patterns for decision making. In this paper, we contribute to the ongoing discussion on the use of intellectual property analytics methods, i.e. artificial intelligence methods, machine learning and deep learning approaches, to analyse intellectual property data. This literature review follows a narrative approach with search strategy, where we present the state-of-the-art in intellectual property analytics by reviewing 57 recent articles. The bibliographic information of the articles is analysed, followed by a discussion of the articles divided into four main categories: knowledge management, technology management, economic value, and extraction and effective management of information. We hope research scholars and industrial users may find this review helpful when searching for the latest research efforts pertaining to intellectual property analytics.
FDA-Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape
As artificial intelligence (AI) has been highly advancing in the last decade, machine learning (ML)-enabled medical devices are increasingly used in healthcare. In this study, we collected publicly available information on AI/ML-enabled medical devices approved by the FDA in the United States, as of the latest update on 19 October 2023. We performed a comprehensive analysis of a total of 691 FDA-approved artificial intelligence and machine learning (AI/ML)-enabled medical devices and offer an in-depth analysis of clearance pathways, approval timeline, regulation type, medical specialty, decision type, recall history, etc. We found a significant surge in approvals since 2018, with clear dominance of the radiology specialty in the application of machine learning tools, attributed to the abundance of routine clinical data. The study also reveals a reliance on the 510(k)-clearance pathway, emphasizing its basis on substantial equivalence and often bypassing the need for new clinical trials. Also, it notes an underrepresentation of pediatric-focused devices and trials, suggesting an opportunity for expansion in this demographic. Moreover, the geographical limitation of clinical trials, primarily within the United States, points to a need for more globally inclusive trials to encompass diverse patient demographics. This analysis not only maps the current landscape of AI/ML-enabled medical devices but also pinpoints trends, potential gaps, and areas for future exploration, clinical trial practices, and regulatory approaches. In conclusion, our analysis sheds light on the current state of FDA-approved AI/ML-enabled medical devices and prevailing trends, contributing to a wider comprehension.
Artificial Intelligence, Machine Learning and Deep Learning: Potential Resources for the Infection Clinician.
BACKGROUND Artificial intelligence (AI), machine learning and deep learning (including generative AI) are increasingly being investigated in the context of research and management of human infection. OBJECTIVES We summarise recent and potential future applications of AI and its relevance to clinical infection practice. METHODS 1,617 PubMed results were screened, with priority given to clinical trials, systematic reviews and meta-analyses. This narrative review focusses on studies using prospectively collected real-world data with clinical validation, and on research with translational potential, such as novel drug discovery and microbiome-based interventions. RESULTS There is some evidence of clinical utility of AI applied to laboratory diagnostics (e.g. digital culture plate reading, malaria diagnosis, antimicrobial resistance profiling), clinical imaging analysis (e.g. pulmonary tuberculosis diagnosis), clinical decision support tools (e.g. sepsis prediction, antimicrobial prescribing) and public health outbreak management (e.g. COVID-19). Most studies to date lack any real-world validation or clinical utility metrics. Significant heterogeneity in study design and reporting limits comparability. Many practical and ethical issues exist, including algorithm transparency and risk of bias. CONCLUSIONS Interest in and development of AI-based tools for infection research and management are undoubtedly gaining pace, although the real-world clinical utility to date appears much more modest.
Artificial intelligence, machine learning, and deep learning in liver transplantation.
Liver transplantation (LT) is a life-saving treatment for individuals with end-stage liver disease. The management of LT recipients is complex, predominantly because of the need to consider demographic, clinical, laboratory, pathology, imaging, and omics data in the development of an appropriate treatment plan. Current methods to collate clinical information are susceptible to some degree of subjectivity; thus, clinical decision-making in LT could benefit from the data-driven approach offered by artificial intelligence (AI). Machine learning and deep learning could be applied in both the pre- and post-LT settings. Some examples of AI applications pre-transplant include optimising transplant candidacy decision-making and donor-recipient matching to reduce waitlist mortality and improve post-transplant outcomes. In the post-LT setting, AI could help guide the management of LT recipients, particularly by predicting patient and graft survival, along with identifying risk factors for disease recurrence and other associated complications. Although AI shows promise in medicine, there are limitations to its clinical deployment which include dataset imbalances for model training, data privacy issues, and a lack of available research practices to benchmark model performance in the real world. Overall, AI tools have the potential to enhance personalised clinical decision-making, especially in the context of liver transplant medicine.
Artificial intelligence/machine learning in respiratory medicine and potential role in asthma and COPD diagnosis.
Artificial intelligence (AI) and machine learning, a subset of AI, are increasingly utilized in medicine. AI excels at performing well-defined tasks, such as image recognition; for example, classifying skin biopsy lesions, determining diabetic retinopathy severity, and detecting brain tumors. This article provides an overview of the use of AI in medicine and particularly in respiratory medicine, where it is used to evaluate lung cancer images, diagnose fibrotic lung disease, and more recently is being developed to aid the interpretation of pulmonary function tests and the diagnosis of a range of obstructive and restrictive lung diseases. The development and validation of AI algorithms requires large volumes of well-structured data, and the algorithms must work with variable levels of data quality. It is important that clinicians understand how AI can function in the context of heterogeneous conditions such as asthma and COPD where diagnostic criteria overlap, how AI use fits into everyday clinical practice and how issues of patient safety should be addressed. AI has a clear role in providing support for doctors in the clinical workplace but its relatively recent introduction means that confidence in its use still has to be fully established. Overall, AI is expected to play a key role in aiding clinicians in the diagnosis and management of respiratory diseases in the future and it will be exciting to see the benefits that arise for patients and doctors from its use in everyday clinical practice.
A Comprehensive Review on Artificial Intelligence/Machine Learning Algorithms for Empowering the Future IoT Toward 6G Era
The evolution of the wireless network systems over decades has been providing new services to the users with the help of innovative network and device technologies. In recent times, the 5G network systems are about to be deployed which creates the opportunity to realize massive connectivity with high throughput, low latency, high energy efficiency and security. It also focuses on providing massive Internet of Things (IoT) network connectivity as well as services for good health, large-scale agricultural and industrial production, intelligent traffic control and electricity generation, transmission and distribution systems. However, the ever-increasing number of user devices is directing researchers towards beyond-5G systems that can allocate higher bandwidth to these user devices. Research on 6G wireless network systems has already begun to provide higher bandwidth availability for densely connected larger network devices with QoS surety. Researchers are leveraging artificial intelligence (AI)/machine learning (ML) for enhancing future IoT network operations and services. This paper attempts to discuss AI/ML algorithms that can help in developing energy efficient, secured and effective IoT network operations and services. In particular, our article concentrates on the major issues and factors that influence the design of the communication systems for future IoT with the integration of AI/ML. It also highlights application domains, including smart healthcare, smart agriculture, smart transportation, smart grid and smart industry that can operate efficiently and securely. Finally, this paper ends with the discussion on future research scopes with these algorithms in addressing the open issues of the future IoT network systems.
Artificial intelligence, machine learning and deep learning: definitions and differences
Artificial intelligence (AI) and its applications are the next big thing in dermatological imaging, including, but not limited to, image acquisition, processing, interpretation, reporting and follow-up planning. In addition, there are benefits in data integration, data storage and data mining. In fact, the possible applications are so many that AI is expected to become an inseparable tool in a dermatologist's life. Most dermatologists, however, are still unfamiliar with AI.
Artificial Intelligence, Machine Learning, and Cardiovascular Disease
Artificial intelligence (AI)-based applications have found widespread use in many fields of science, technology, and medicine. The use of enhanced computing power of machines in clinical medicine and diagnostics has been under exploration since the 1960s. More recently, with advances in computing and in algorithms enabling machine learning, especially deep learning networks that mimic the human brain in function, there has been renewed interest in using them in clinical medicine. In cardiovascular medicine, AI-based systems have found new applications in cardiovascular imaging, cardiovascular risk prediction, and newer drug targets. This article aims to describe different AI applications including machine learning and deep learning and their applications in cardiovascular medicine. AI-based applications have enhanced our understanding of different phenotypes of heart failure and congenital heart disease. These applications have led to newer treatment strategies for different types of cardiovascular diseases, newer approaches to cardiovascular drug therapy and postmarketing surveys of prescription drugs. However, there are several challenges in the clinical use of AI-based applications and interpretation of the results, including data privacy, poorly selected/outdated data, selection bias, and unintentional continuance of historical biases/stereotypes in the data, which can lead to erroneous conclusions. Still, AI is a transformative technology and has immense potential in health care.
A Comprehensive Survey: Evaluating the Efficiency of Artificial Intelligence and Machine Learning Techniques on Cyber Security Solutions
Given the continually rising frequency of cyberattacks, the adoption of artificial intelligence methods, particularly Machine Learning (ML), Deep Learning (DL), and Reinforcement Learning (RL), has become essential in the realm of cybersecurity. These techniques have proven to be effective in detecting and mitigating cyberattacks, which can cause significant harm to individuals, organizations, and even countries. Machine learning algorithms use statistical methods to identify patterns and anomalies in large datasets, enabling security analysts to detect previously unknown threats. Deep learning, a subfield of ML, has shown great potential in improving the accuracy and efficiency of cybersecurity systems, particularly in image and speech recognition. On the other hand, RL is again a subfield of machine learning that trains algorithms to learn through trial and error, making it particularly effective in dynamic environments. We also evaluated the usage of ChatGPT-like AI tools in cyber-related problem domains on both sides, positive and negative. This article provides an overview of how ML, DL, and RL are applied in cybersecurity, including their usage in malware detection, intrusion detection, vulnerability assessment, and other areas. The paper also specifies several research questions to provide a more comprehensive framework to investigate the efficiency of AI and ML models in the cybersecurity domain. The state-of-the-art studies using ML, DL, and RL models are evaluated in each Section based on the main idea, techniques, and important findings. It also discusses these techniques’ challenges and limitations, including data quality, interpretability, and adversarial attacks. Overall, the use of ML, DL, and RL in cybersecurity holds great promise for improving the effectiveness of security systems and enhancing our ability to protect against cyberattacks. Therefore, it is essential to continue developing and refining these techniques to address the ever-evolving nature of cyber threats. Besides, some promising solutions that rely on machine learning, deep learning, and reinforcement learning are susceptible to adversarial attacks, underscoring the importance of factoring in this vulnerability when devising countermeasures against sophisticated cyber threats. We also concluded that ChatGPT can be a valuable tool for cybersecurity, but it should be noted that ChatGPT-like tools can also be manipulated to threaten the integrity, confidentiality, and availability of data.
Artificial intelligence, machine learning, and drug repurposing in cancer
Introduction: Drug repurposing provides a cost-effective strategy to re-use approved drugs for new medical indications. Several machine learning (ML) and artificial intelligence (AI) approaches have been developed for systematic identification of drug repurposing leads based on big data resources, hence further accelerating and de-risking the drug development process by computational means. Areas covered: The authors focus on supervised ML and AI methods that make use of publicly available databases and information resources. While most of the example applications are in the field of anticancer drug therapies, the methods and resources reviewed are widely applicable also to other indications including COVID-19 treatment. A particular emphasis is placed on the use of comprehensive target activity profiles that enable a systematic repurposing process by extending the target profile of drugs to include potent off-targets with therapeutic potential for a new indication. Expert opinion: The scarcity of clinical patient data and the current focus on genetic aberrations as primary drug targets may limit the performance of anticancer drug repurposing approaches that rely solely on genomics-based information. Functional testing of cancer patient cells exposed to a large number of targeted therapies and their combinations provides an additional source of repurposing information for tissue-aware AI approaches.
Convergence of evolving artificial intelligence and machine learning techniques in precision oncology
The confluence of new technologies with artificial intelligence (AI) and machine learning (ML) analytical techniques is rapidly advancing the field of precision oncology, promising to improve diagnostic approaches and therapeutic strategies for patients with cancer. By analyzing multi-dimensional, multiomic, spatial pathology, and radiomic data, these technologies enable a deeper understanding of the intricate molecular pathways, aiding in the identification of critical nodes within the tumor’s biology to optimize treatment selection. The applications of AI/ML in precision oncology are extensive and include the generation of synthetic data, e.g., digital twins, in order to provide the necessary information to design or expedite the conduct of clinical trials. Currently, many operational and technical challenges exist related to data technology, engineering, and storage; algorithm development and structures; quality and quantity of the data and the analytical pipeline; data sharing and generalizability; and the incorporation of these technologies into the current clinical workflow and reimbursement models.
Artificial intelligence, machine learning, computer-aided diagnosis, and radiomics: advances in imaging towards to precision medicine
The discipline of radiology and diagnostic imaging has evolved greatly in recent years. We have observed an exponential increase in the number of exams performed, subspecialization of medical fields, and increases in accuracy of the various imaging methods, making it a challenge for the radiologist to “know everything about all exams and regions”. In addition, imaging exams are no longer only qualitative and diagnostic, providing now quantitative information on disease severity, as well as identifying biomarkers of prognosis and treatment response. In view of this, computer-aided diagnosis systems have been developed with the objective of complementing diagnostic imaging and helping the therapeutic decision-making process. With the advent of artificial intelligence, “big data”, and machine learning, we are moving toward the rapid expansion of the use of these tools in daily life of physicians, making each patient unique, as well as leading radiology toward the concept of multidisciplinary approach and precision medicine. In this article, we will present the main aspects of the computational tools currently available for analysis of images and the principles of such analysis, together with the main terms and concepts involved, as well as examining the impact that the development of artificial intelligence has had on radiology and diagnostic imaging.
Future of Artificial Intelligence (AI) - Machine Learning (ML) Trends in Pathology and Medicine.
Artificial Intelligence (AI) and Machine Learning (ML) are transforming the field of medicine. Healthcare organizations are now starting to establish management strategies for integrating such platforms (AI-ML toolsets) which leverage the computational power of advanced algorithms to analyze data and to provide better insights which ultimately translates to enhanced clinical decision-making and improved patient outcomes. Emerging AI-ML platforms and trends in pathology and medicine are reshaping the field by offering innovative solutions to enhance diagnostic accuracy, operational workflows, clinical decision support, and clinical outcomes. These tools are also increasingly valuable in pathology research where they contribute to automated image analysis, biomarker discovery, drug development, clinical trials, and predictive analytics. Other related trends include the adoption of ML-Ops (Machine Learning Operations) for managing models in clinical settings, the application of multimodal and multi-agent AI to utilize diverse data sources, expedited translational research and virtualized education for training and simulation. As the final chapter of our AI educational series, this review article delves into the current adoption, future directions, and transformative potential of AI-ML platforms in pathology and medicine, discussing their applications, benefits, challenges, and future perspectives.
Trends in artificial intelligence, machine learning, and chemometrics applied to chemical data
Abstract Artificial intelligence‐based methods such as chemometrics, machine learning, and deep learning are promising tools that lead to a clearer and better understanding of data. Only with these tools, data can be used to its full extent, and the gained knowledge on processes, interactions, and characteristics of the sample is maximized. Therefore, scientists are developing data science tools mentioned above to automatically and accurately extract information from data and increase the application possibilities of the respective data in various fields. Accordingly, AI‐based techniques were utilized for chemical data since the 1970s and this review paper focuses on the recent trends of chemometrics, machine learning, and deep learning for chemical and spectroscopic data in 2020. In this regard, inverse modeling, preprocessing methods, and data modeling applied to spectra and image data for various measurement techniques are discussed.
Artificial intelligence, machine learning and process automation: existing knowledge frontier and way forward for mining sector
No abstract available yet.