An Introduction to RAID in Linux

1. Overview

RAID stands for Redundant Array of Inexpensive/Independent Disks. We build our storage with redundancy (duplication of critical functions) so that no one part can fail and bring down our whole system. Because data reads and writes are spread out over more than one disk, RAID can also provide us performance benefits. Modern filesystems like ZFS and btrfs have built-in RAID functionality.

It's also important we remember what RAID is not: it's not a backup. For example, if our database gets wiped or corrupted, a mirrored RAID gives us two copies of our blank or broken database. A separate backup gives us a recovery option.

In this tutorial, we'll explore ways to use RAID in Linux.

2. Types of RAID

RAID can be implemented with a dedicated hardware controller or entirely in software. Software RAID is more common today.

We refer to different kinds of RAID via a standard numbering system of "RAID levels". The numbers do not refer to how many disks are used.

RAID's biggest advantage comes from the replication of data. Our data exists in more than one place on our RAID system, so we can avoid downtime during hardware failure. The replication may be via mirroring (keeping duplicate copies of everything) or parity (checksum calculations of our data).

2.1. Hardware vs. Software

In this guide, we'll explore the RAID options built into Linux via software. Hardware RAID is beyond the scope of this article; just be aware that it is only useful on Linux in special cases, and we may need to turn it off in our computer's BIOS.

2.2. Striped and/or Mirrored (RAID 0, 1, or 10)

RAID level 0 has an appropriate number: it has zero redundancy! Instead, in RAID 0, data is written across the drives, or "striped". This means it can potentially be read from more than one drive concurrently. That can give us a real performance boost. But at the same time, now we have two drives that could fail, taking out all our data.
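To make that risk concrete: if each drive fails independently, the stripe survives only if every drive survives. A quick sketch of the math (the 3% annual failure rate is an assumption for illustration, not a measured figure):

```python
# Probability that a RAID 0 stripe loses data in a year, assuming each of
# its n drives fails independently with annual probability p.
def raid0_annual_loss(n: int, p: float) -> float:
    # The stripe loses data unless all n drives survive.
    return 1 - (1 - p) ** n

# Hypothetical 3% annual failure rate per drive:
for n in (1, 2, 4):
    print(f"{n} drive(s): {raid0_annual_loss(n, 0.03):.3f}")
```

Each drive added to a stripe makes total data loss more likely, which is why RAID 0 on its own is a short-term tool.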
So, RAID 0 is only useful if we want a performance boost but don't care about long-term storage.

We refer to RAID level 1 as "mirrored" because it is created with a pair of equal drives. Each time data is written to a RAID 1 device, it goes to both drives in the pair. Write performance is thus slightly slower, but read performance can be much faster, as data is concurrently read from both disks.

These two levels of RAID can be combined or nested, creating what's called RAID 1+0 or just RAID 10. (There are other permutations, but RAID 10 is the most common.) We can create a RAID 10 device with four disks: one pair of disks in RAID 0, mirroring another pair of disks in RAID 0. This RAID of RAIDs attempts to combine RAID 0's performance with RAID 1's redundancy, to be both speedy and reliable.

2.3. Parity (RAID 5 or RAID 6)

Instead of storing complete copies of our data, we can save space by storing parity data. Parity allows our RAIDs to reconstruct data stored on failed drives.

RAID 5 requires at least three equal-size drives to function. In practice, we can add several more, though rarely more than ten are used.

RAID 5 sets aside one drive's worth of space for checksum parity data. It is not all kept on one drive, however; instead, the parity data is striped across all of the devices along with the filesystem data.

This means we usually want to build our RAID out of a set of drives of identical size and speed. Adding a larger drive won't get us more space, as the RAID will just use the size of the smallest member. Similarly, the RAID's performance will be limited by its slowest member.

RAID 5 can recover and rebuild with no data loss if one drive dies. If two or more drives crash, we'll have to restore the whole thing from backups.

RAID 6 is similar to RAID 5 but sets aside two disks' worth for parity data. That means a RAID 6 can recover from two failed members.

RAID 5 gives us more usable storage than mirroring does, but at the price of some performance.
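The reconstruction idea behind parity can be sketched with XOR arithmetic: the parity block is the XOR of the data blocks, so any one missing block is recoverable by XOR-ing the survivors. This is a toy model of what RAID 5 does (real mdraid works on large chunks and rotates parity across the drives):

```python
from functools import reduce

# Toy RAID 5 stripe: three equal-size data blocks plus one parity block,
# where each parity byte is the XOR of the corresponding data bytes.
blocks = [b"haystack", b"baeldung", b"tutorial"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Simulate losing one data block ("drive failure"), then rebuild it by
# XOR-ing the surviving data blocks with the parity block.
lost = blocks[1]
survivors = [blocks[0], blocks[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

assert rebuilt == lost  # the "failed drive" is fully reconstructed
```

Since one parity block covers any single failure, the array only has to give up one drive's worth of capacity, which is where RAID 5's storage advantage over mirroring comes from.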
A quick way to estimate storage is the total capacity of our equal-sized drives, minus one drive's worth. For example, if we have 6 drives of 1 terabyte, our RAID 5 will have 5 terabytes of usable space. That's 83%, compared to 50% if our drives were mirrored in RAID 1.

At one time, server manufacturers considered RAID 5 the best practice in storage. It has fallen out of favor to some degree due to the so-called "RAID 5 write hole", a problem addressed by next-generation filesystems and RAIDZ.

3. Linux Kernel RAID (mdraid)

Let's create some new RAIDs with the mdadm tool.

3.1. Your Basic RAID

We'll start with two identical disks or partitions, and create a striped RAID 0 device.

First, let's make sure we have the correct partitions. We don't want to destroy something important:

```shell
# lsblk -o NAME,SIZE,TYPE
NAME      SIZE TYPE
sdb     931.5G disk
└─sdb1      4G part
sdc     931.5G disk
└─sdc1      4G part
```

We'll use the mdadm command (multi-disk administrator):

```shell
# mdadm --verbose --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
```

Our first RAID device has been created! Let's break down the options we use with mdadm:

- --verbose tells us more about what is happening.
- --create tells mdadm to create a new RAID device, naming it whatever we want (in this case, md0).
- --level=0 is our RAID level, as discussed above. Level 0 is just striped, with no redundancy.
- --raid-devices=2 lets mdadm know to expect two physical disks for this array.
- /dev/sdb1 and /dev/sdc1 are the two partitions included in our array of independent disks.

So our RAID of partitions has been created, but like any device, it does not yet have a filesystem, and it hasn't been mounted. We can look at it again with lsblk:

```shell
# lsblk -o NAME,SIZE,TYPE
NAME        SIZE TYPE
sdb       931.5G disk
└─sdb1        4G part
  └─md0       8G raid0
sdc       931.5G disk
└─sdc1        4G part
  └─md0       8G raid0
```

Notice how the md0 device is the size of the two partitions added together, as we'd expect from RAID 0.

3.2. Managing Our RAID

We also find useful information in /proc/mdstat:

```shell
# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      1952448512 blocks super 1.2 512k chunks

unused devices: <none>
```

To use this new RAID, we need to format it with a filesystem and mount it:

```shell
# mkfs /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 2094592 4k blocks and 524288 inodes
Filesystem UUID: 947484b6-05ff-4d34-a0ed-49ee7c5eebd5
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

# mount /dev/md0 /mnt/myraid/
# df -h /mnt/myraid
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        7.8G   24K  7.4G   1% /mnt/myraid
```

Like any other filesystem besides ZFS, we would add a line to /etc/fstab to make this mount point permanent.

If we want to boot from our RAID device (and we may not, to keep things simple), or otherwise allow mdadm to manage the array during startup or shutdown, we can append our array's info to an optional /etc/mdadm/mdadm.conf file:

```shell
# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 spares=1 name=salvage:1 UUID=0c32834c:e5491814:94a4aa96:32d87024
```

And if we want to take down our RAID, we can use mdadm again:

```shell
# mdadm -S /dev/md0
mdadm: stopped /dev/md0
```

We can create similar RAIDs with variations of the --level and --raid-devices options. For example, we could create a 5-disk RAID 5:

```shell
# mdadm --verbose --create /dev/md1 --level=5 --raid-devices=5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 4189184K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
```

Then, we can mkfs and mount our latest RAID.

3.3. Failed Drives and Hot Spares

What would happen to our new RAID 5 if one of the drives failed?
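We'll simulate a failure below; in production, we'd want to notice one automatically. /proc/mdstat is plain text and easy to poll from a script, since failed members are tagged with (F). A minimal sketch of such a checker (a hypothetical monitoring helper, not part of mdadm; in real use we'd read /proc/mdstat rather than a sample string):

```python
import re

def degraded_arrays(mdstat_text: str) -> list:
    """Return the names of md arrays that have a member marked failed."""
    degraded = []
    for line in mdstat_text.splitlines():
        # Array lines look like: "md1 : active raid5 sdf1[5] ... sdc1[0](F)"
        m = re.match(r"^(md\d+)\s*:", line)
        if m and "(F)" in line:
            degraded.append(m.group(1))
    return degraded

sample = """\
Personalities : [raid0] [raid6] [raid5] [raid4]
md1 : active raid5 sdf1[5] sde1[3] sdd1[2] sdb1[1] sdc1[0](F)
      16756736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
"""
print(degraded_arrays(sample))  # ['m' + 'd1'] -> ['md1']
```

A cron job or monitoring agent could run this and alert us, though mdadm --monitor offers similar functionality out of the box.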
Let's simulate that event with mdadm:

```shell
# mdadm /dev/md1 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1
```

Now, what does /proc/mdstat tell us? Let's take a look:

```shell
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md1 : active raid5 sdf1[5] sde1[3] sdd1[2] sdb1[1] sdc1[0](F)
      16756736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]

unused devices: <none>
```

Here, we see the partition we selected, marked (F) for failed.

We can also ask mdadm for more details of our array:

```shell
# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Aug 10 14:52:59 2021
        Raid Level : raid5
        Array Size : 16756736 (15.98 GiB 17.16 GB)
     Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Tue Aug 10 14:59:20 2021
             State : clean, degraded
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : salvage:1  (local to host salvage)
              UUID : 0c32834c:e5491814:94a4aa96:32d87024
            Events : 24

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdb1
       2       8       35        2      active sync   /dev/sdd1
       3       8       36        3      active sync   /dev/sde1
       5       8       37        4      active sync   /dev/sdf1

       0       8       33        -      faulty   /dev/sdc1
```

Our RAID is still going strong; a user should not be able to tell any difference. But we can see it's in a "degraded" state, so we need to replace that faulty hard drive.

Let's say we have a replacement for our dead drive. It should be identical to the originals. We can remove our faulty drive and add a new one. We should remember that the /dev/sd* list of devices will sometimes change if the hardware changes, so let's double-check with lsblk.

First, we remove our faulty drive from the array:

```shell
# mdadm /dev/md1 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
```

Next, we physically replace our drive and add the new one.
(This is where hot-swappable drive hardware saves us a lot of time!)

```shell
# mdadm /dev/md1 --add /dev/sdc1
mdadm: added /dev/sdc1
```

We can look at /proc/mdstat to watch the RAID automatically rebuild:

```shell
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[6] sdf1[5] sde1[3] sdd1[2] sdb1[1]
      16756736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
      [>...................]  recovery = 10.7% (452572/4189184) finish=3.4min speed=18102K/sec

unused devices: <none>
```

If uptime is really important, we can add a dedicated spare drive for mdadm to switch over to automatically:

```shell
# mdadm /dev/md1 --add-spare /dev/sdg1
mdadm: added /dev/sdg1
```

It might be worth it; we can weigh the time and money involved. Let's check on our array again:

```shell
# mdadm --detail /dev/md1 | grep spare
       7       8       38        -      spare   /dev/sdg1
```

Five disks are striped with data and parity. One disk is unused, just waiting to be needed.

4. The Logical Volume Manager

Most modern Linux filesystems are no longer created directly on a drive or a partition, but on a logical volume created with the LVM.

Briefly, LVM combines Physical Volumes (drives or partitions) into Volume Groups. Volume Groups are pools from which we can create Logical Volumes. We can put filesystems onto these Logical Volumes.

RAID comes into it during the creation of Logical Volumes. These may be linear, striped, mirrored, or a more complex parity configuration.

We should note that creating a RAID LVM Logical Volume uses Linux kernel RAID (mdraid) underneath. If we want the convenience of LVM, such as being able to expand Volume Groups and resize Logical Volumes, we can have it along with the reliability of simple mdraid. But if LVM sounds like too much added complexity, we can always stick with mdraid on our physical drives. Yet another common option is creating our RAID devices with mdadm and then using them as Physical Volumes with LVM.

4.1. Telling LVM to Use Our Volumes in a RAID

LVM RAIDs are created at the Logical Volume level. That means we need to have first created partitions, used pvcreate to tag them as LVM Physical Volumes, and used vgcreate to put them into a Volume Group. In this example, we've called the Volume Group raid1vg0.

The RAID creation step specifies the type of RAID and how many disks to use for mirroring or striping. We don't need to specify each Physical Volume; we can let LVM handle all of that:

```shell
# lvcreate --mirrors 1 --type raid1 -l 100%FREE -n raid01v0 raid1vg0
  Logical volume raid01v0 created.
# mkfs.ext4 /dev/raid1vg0/raid01v0
```

As usual, we then format and mount our new RAID volume. If we want a system that handles all of that automatically, we have ZFS.

5. Integrated Filesystem RAID with ZFS or btrfs

We won't cover the details of next-generation filesystems in this article, but many of the concepts from software RAID and LVM translate over.

ZFS uses "vdevs" (virtual devices) much as LVM uses Volume Groups. These vdevs may be physical disks, mirrors, raidz variants (ZFS's take on RAID 5), or, as of OpenZFS 2.1, draid.

For example, we can create a RAID 1 mirror zpool:

```shell
# zpool create -f demo mirror /dev/sdc /dev/sdd
```

ZFS handles everything else for us, formatting and mounting our new pool under /demo.

The equivalent in btrfs is:

```shell
# mkfs.btrfs -L demo -d raid1 /dev/sdc /dev/sdd
```

One major limitation of btrfs is that it does not support RAID 5 or RAID 6, at least not reliably. So, we'll keep that far away from production systems.

These next-generation filesystems take care of many of the details of RAID and volume management. In addition, they provide much greater data integrity through block-level checksums. Although they are a whole other topic, we may solve more of our storage problems by investigating ZFS or btrfs.

6. Further Reading

- The Linux RAID wiki goes into depth about current issues and best practices.
- The ArchLinux wiki explains LVM in great detail.
- The OpenZFS project details how to get started using ZFS on Linux.

7. Conclusion

We use RAID for reliability and to limit downtime.

In this article, we've looked at the building blocks of Linux software RAID (md). We've also considered some more complex and advanced additions.

There are more details to consider in the day-to-day monitoring and maintenance of our RAID, but this gets us started.
