Chalmers Open Digital Repository
Welcome to Chalmers' open digital repository!
Here you will find:
- Student theses published at the university, including bachelor's theses as well as degree projects at undergraduate and master's level
- Digital special collections, such as the Chalmers model chamber (Chalmers modellkammare)
- Selected project reports
Research publications, reports, and dissertations can be found at research.chalmers.se
Units in Chalmers ODR
Select a unit to see all of its collections.
Recently added
LLM-based Log Analysis for Fault Localization in the Automotive Industry
(2025) Ekström, Anton; Rhedin Stam, Hampus
This thesis investigates the application of large language models (LLMs) to aid
practitioners of log analysis for fault localization in the automotive industry. An
existing LLM-based log summarization tool is extended and evaluated, focusing on
the cognitive load of practitioners and their satisfaction with the tool. The effect
of LLM-based log summarization on practitioner productivity is investigated
through a case study at a company in the automotive industry. Think-aloud sessions
and semi-structured interviews are carried out to assess the impact of the tool on
the fault localization workflow of study participants.
Results suggest that LLM-generated log summaries can aid practitioners by giving
them a first glance of the issue, thereby potentially reducing manual effort and
improving productivity. However, the results also suggest that the context of the
issue, domain knowledge, and the interactivity of the tool play a major role in success.
A lack of context and means for the practitioner to guide the tool could result in
a less effective workflow with higher cognitive load. The thesis provides insights on
the integration of LLM-based log analysis tools within fault localization workflows
in the industry, highlighting both the benefits and challenges of deploying LLMs in
real-world fault analysis scenarios.
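The summarization step described in this abstract can be sketched as a simple prompt-construction routine; the chunk size, prompt wording, and log lines below are illustrative assumptions, not the actual tool's implementation.

```python
def build_summary_prompt(log_lines, max_lines=50):
    """Construct a prompt asking an LLM to summarize a slice of a fault log.
    Truncating to max_lines keeps the prompt within a model's context window."""
    chunk = log_lines[:max_lines]
    header = ("You are assisting with fault localization. "
              "Summarize the key errors and their likely origin:\n\n")
    return header + "\n".join(chunk)

# Hypothetical automotive log lines, for illustration only
prompt = build_summary_prompt([
    "2025-01-10 12:00:01 ECU-3 ERROR CAN bus timeout",
    "2025-01-10 12:00:02 ECU-3 WARN retry 1/3",
])
```

In practice the resulting prompt would be sent to an LLM API; that call is omitted here since provider and model are details of the evaluated tool.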
Emerging Architectures for Chemical Language Modeling
(2025) Hagström, Ester; Redmo Axelsson, Erik
In recent years, language modeling architectures have become increasingly prominent
in the field of generative chemistry, offering new approaches for the de novo
design and optimization of small molecules. This thesis presents a comparative
study of two emerging architectures, the decoder-only Transformer and the Mamba
architecture, against a conventional Recurrent Neural Network with LSTM cells. The
investigation explores how choices in training data, including a targeted medicinal
dataset (ChEMBL) and a chemically broad dataset (PubChem), as well as data
augmentation via randomized SMILES representations, influence generative capacity
and chemical space coverage. In addition to this, task-specific optimization of
models through reinforcement learning is studied, and the models are compared with
respect to their ability to generate diverse molecules with desired properties.
Through pretraining experiments, it is shown that while the Mamba and RNN architectures
reach their optimum performance significantly faster, the decoder-only
Transformer achieves the highest validity and uniqueness in molecular generation.
Training on PubChem, as opposed to ChEMBL, generally enhances validity and
uniqueness but tends to reduce novelty, indicating a trade-off between chemical
space saturation and innovation. Data augmentation through randomized SMILES
helped all models avoid memorizing the dataset, resulting in higher novelty across
architectures and datasets.
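The uniqueness and novelty metrics discussed above are commonly computed as simple set ratios over the generated molecules. This sketch uses placeholder SMILES strings and treats validity as already established, since actual validity checking requires a chemistry toolkit such as RDKit.

```python
def generation_metrics(generated, training_set):
    """Compute uniqueness and novelty of generated SMILES strings.
    uniqueness = fraction of generated samples that are distinct;
    novelty    = fraction of distinct samples absent from the training set."""
    unique = set(generated)
    uniqueness = len(unique) / len(generated)
    novel = unique - set(training_set)
    novelty = len(novel) / len(unique)
    return uniqueness, novelty

# Toy example: 4 samples, one duplicate, one molecule seen during training
gen = ["CCO", "CCO", "c1ccccc1", "CCN"]
u, n = generation_metrics(gen, training_set={"CCO"})
```

Here u = 3/4 (one duplicate) and n = 2/3 (the training molecule "CCO" is not novel), matching how memorization depresses novelty as the abstract describes.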
Reinforcement learning experiments further reveal that all three architectures are
capable of optimizing toward specific molecular properties, with the decoder-only
Transformer and Mamba each exhibiting distinct strengths depending on the optimization
task. Regarding the pretraining condition’s effect on reinforcement learning,
ChEMBL-trained models outperformed those trained with PubChem on multiple
tasks, and all architectures, but especially Mamba, benefitted from being pretrained
with randomized SMILES. Notably, even reduced-parameter models, such
as a downsized decoder-only Transformer variant, perform competitively relative to
larger architectures.
Applied ratio pyrometry for temperature measurements in industrial-scale flames
(2026) Raffai, Herman
Optimizing Stream Engines for use in eFPGAs on Radiation Hardened SoCs
(2025) Magnusson, Adam; Örtenberg, Erik
Systems on a Chip (SoCs) are becoming increasingly common for use in most computational domains as heterogeneous hardware architectures prove themselves
very efficient and powerful. The space domain is one such example and poses a
plethora of challenging design constraints, which become even more pronounced
in the context of radiation hardened embedded Field Programmable Gate Arrays
(eFPGAs). eFPGAs lend themselves to supporting powerful hardware accelerators
(HAs), where data is streamed in by the use of a stream engine. Due to the relatively
small amount of programmable logic in eFPGAs, the stream engine supporting the
HA must be made as resource efficient as possible. However, to the best of the
authors' knowledge, there is no previous work exploring resource-optimized stream
engines for use on eFPGAs.
In this thesis, we implement a performant and resource-efficient stream engine
for eFPGAs. The proposed stream engine, named GANIMEDE, achieved a
communication link utilization of 94.5% after accounting for protocol overheads,
while occupying only 2.1% of the total available resources provided by the targeted eFPGA. In
addition, this thesis offers a discussion on desirable properties in stream engines
and presents parts of supported protocols that should be implemented on the SoC
instead of the eFPGA.
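Link utilization after protocol overheads, as reported above, is typically payload bytes divided by total bytes on the wire. The packet sizes below are hypothetical and do not reflect GANIMEDE's actual framing.

```python
def link_utilization(payload_bytes, header_bytes, trailer_bytes=0):
    """Fraction of transferred bytes that carry payload, given
    per-packet protocol overhead (header and optional trailer)."""
    total = payload_bytes + header_bytes + trailer_bytes
    return payload_bytes / total

# Illustrative: 512-byte payload with 28 bytes of framing per packet
u = link_utilization(512, 24, 4)  # ≈ 0.948, i.e. roughly 94.8% utilization
```

Larger payloads amortize the fixed per-packet overhead, which is one reason stream engines favor long bursts.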
Effects of Cognitive Load in Human-AI Requirements Engineering
(2025) Shivamurthy Praveen, Niharika Nandi; Sasvihalli, Laxmi Prashantraddi
As Artificial Intelligence becomes more integrated into software engineering, its role in decision-support systems within Requirements Engineering has grown. However, the cognitive demands placed on users interacting with these AI tools remain underexplored. This thesis investigates how explanation formats offered by Explainable AI (XAI) affect mental effort, task difficulty, confidence, and correctness during prioritization tasks inspired by requirements engineering.
Through a controlled experiment with 61 participants, three XAI formats (bar charts, textual explanations, and confidence scores) were evaluated across two task pairs of differing complexity. The study examined the influence of task complexity and explanation format, the impact of explanation type on decision-making quality, and whether participant preferences for certain formats aligned with improved performance and lower cognitive strain. Statistical analyses, including Spearman correlation and independent t-tests, revealed that task complexity consistently influenced cognitive load, while explanation format had no clear effect. Additionally, although preferred formats
did not universally enhance task performance, participants who favored confidence scores showed marginally higher correctness and confidence levels. These findings suggest that cognitive effort in AI-assisted requirements engineering tasks is shaped more by task characteristics than explanation format alone, and that tailoring explanations to individual user preferences may offer subtle benefits.
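The Spearman rank correlation used in the analysis above can be computed without external libraries when the data contain no tied ranks; the sample values here are invented for illustration, and a library such as scipy.stats would be used in practice to handle ties and p-values.

```python
def spearman_rho(x, y):
    """Spearman rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Perfectly monotone relationship gives rho = 1.0
rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
```

Because the statistic is rank-based, it captures monotone (not just linear) associations, which suits ordinal measures such as self-reported cognitive load.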
