Custom Serializable Data Types
Contents
- 1. WritableComparable
- 1.1 Writable
- 1.2 Comparable
- 1.3 IntWritable
- 2. The Custom Serializable Type RectangleWritable
- 3. Rectangle Area Calculation
- 3.1 Map
- 3.2 Reduce
- 4. Code and Results
- 4.1 Dependencies in pom.xml
- 4.2 The Utility Class util
- 4.3 Rectangle Area Calculation
- 4.4 Results
- References
The Apache Hadoop source code quoted in this article is distributed under the Apache License, Version 2.0; see the license text for details.
1. WritableComparable
A custom serializable data type must implement org.apache.hadoop.io.WritableComparable, whose source code is shown below. As the source shows, WritableComparable extends two interfaces: org.apache.hadoop.io.Writable and java.lang.Comparable.
```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.io;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A {@link Writable} which is also {@link Comparable}.
 *
 * <p><code>WritableComparable</code>s can be compared to each other, typically
 * via <code>Comparator</code>s. Any type which is to be used as a
 * <code>key</code> in the Hadoop Map-Reduce framework should implement this
 * interface.</p>
 *
 * <p>Note that <code>hashCode()</code> is frequently used in Hadoop to partition
 * keys. It's important that your implementation of hashCode() returns the same
 * result across different instances of the JVM. Note also that the default
 * <code>hashCode()</code> implementation in <code>Object</code> does <b>not</b>
 * satisfy this property.</p>
 *
 * <p>Example:</p>
 * <blockquote><pre>
 * public class MyWritableComparable implements
 *     WritableComparable{@literal <MyWritableComparable>} {
 *   // Some data
 *   private int counter;
 *   private long timestamp;
 *
 *   public void write(DataOutput out) throws IOException {
 *     out.writeInt(counter);
 *     out.writeLong(timestamp);
 *   }
 *
 *   public void readFields(DataInput in) throws IOException {
 *     counter = in.readInt();
 *     timestamp = in.readLong();
 *   }
 *
 *   public int compareTo(MyWritableComparable o) {
 *     int thisValue = this.value;
 *     int thatValue = o.value;
 *     return (thisValue < thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
 *   }
 *
 *   public int hashCode() {
 *     final int prime = 31;
 *     int result = 1;
 *     result = prime * result + counter;
 *     result = prime * result + (int) (timestamp ^ (timestamp >>> 32));
 *     return result;
 *   }
 * }
 * </pre></blockquote>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface WritableComparable<T> extends Writable, Comparable<T> {
}
```
1.1 Writable
Writable declares two methods, void write(DataOutput out) and void readFields(DataInput in): readFields deserializes the object's fields from the input, while write serializes them to the output. Since Writable is a parent interface of WritableComparable, a custom serializable type must implement both methods.
```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A serializable object which implements a simple, efficient, serialization
 * protocol, based on {@link DataInput} and {@link DataOutput}.
 *
 * <p>Any <code>key</code> or <code>value</code> type in the Hadoop Map-Reduce
 * framework implements this interface.</p>
 *
 * <p>Implementations typically implement a static <code>read(DataInput)</code>
 * method which constructs a new instance, calls {@link #readFields(DataInput)}
 * and returns the instance.</p>
 *
 * <p>Example:</p>
 * <blockquote><pre>
 * public class MyWritable implements Writable {
 *   // Some data
 *   private int counter;
 *   private long timestamp;
 *
 *   // Default constructor to allow (de)serialization
 *   MyWritable() { }
 *
 *   public void write(DataOutput out) throws IOException {
 *     out.writeInt(counter);
 *     out.writeLong(timestamp);
 *   }
 *
 *   public void readFields(DataInput in) throws IOException {
 *     counter = in.readInt();
 *     timestamp = in.readLong();
 *   }
 *
 *   public static MyWritable read(DataInput in) throws IOException {
 *     MyWritable w = new MyWritable();
 *     w.readFields(in);
 *     return w;
 *   }
 * }
 * </pre></blockquote>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {
  /**
   * Serialize the fields of this object to <code>out</code>.
   *
   * @param out <code>DataOuput</code> to serialize this object into.
   * @throws IOException any other problem for write.
   */
  void write(DataOutput out) throws IOException;

  /**
   * Deserialize the fields of this object from <code>in</code>.
   *
   * <p>For efficiency, implementations should attempt to re-use storage in the
   * existing object where possible.</p>
   *
   * @param in <code>DataInput</code> to deseriablize this object from.
   * @throws IOException any other problem for readFields.
   */
  void readFields(DataInput in) throws IOException;
}
```
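As a quick, self-contained illustration of the write/readFields contract, the fields can be round-tripped through an in-memory byte buffer using only the JDK's DataOutputStream and DataInputStream. The class below is hypothetical (it mirrors the MyWritable example from the Javadoc, but does not depend on Hadoop):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical stand-in for a Writable: same write/readFields shape,
// but using only the JDK so it can run without Hadoop on the classpath.
public class WritableSketch {
    private int counter;
    private long timestamp;

    public WritableSketch() {}

    public WritableSketch(int counter, long timestamp) {
        this.counter = counter;
        this.timestamp = timestamp;
    }

    // Mirrors Writable.write: serialize the fields in a fixed order.
    public void write(DataOutput out) throws IOException {
        out.writeInt(counter);
        out.writeLong(timestamp);
    }

    // Mirrors Writable.readFields: read the fields back in the same order.
    public void readFields(DataInput in) throws IOException {
        counter = in.readInt();
        timestamp = in.readLong();
    }

    public int getCounter() { return counter; }

    public long getTimestamp() { return timestamp; }

    // Serialize w into a byte buffer, then deserialize into a fresh object.
    public static WritableSketch roundTrip(WritableSketch w) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            w.write(new DataOutputStream(buf));
            WritableSketch copy = new WritableSketch();
            copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams should not fail
        }
    }

    public static void main(String[] args) {
        WritableSketch copy = roundTrip(new WritableSketch(7, 123456789L));
        System.out.println(copy.getCounter() + " " + copy.getTimestamp());
    }
}
```

The key point is that readFields must read the fields back in exactly the order write emitted them; the serialized form carries no field names, only raw bytes.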
1.2 Comparable
Comparable is the other parent interface of WritableComparable; its source code is shown below. It declares a single method, int compareTo(T var1), which compares two values of type T (typically inside a class that implements Comparable). A custom serializable type must implement this method as well.
```java
// Source code is decompiled from a .class file using FernFlower decompiler.
package java.lang;

public interface Comparable<T> {
    int compareTo(T var1);
}
```
1.3 IntWritable
IntWritable is a serializable type shipped with Hadoop, and its source code is a good reference when writing your own. Besides the write, readFields, and compareTo methods discussed above, it is also advisable to implement toString (public String toString()).
```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.io;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/** A WritableComparable for ints. */
@InterfaceAudience.Public
@InterfaceStability.Stable
public class IntWritable implements WritableComparable<IntWritable> {
  private int value;

  public IntWritable() {}

  public IntWritable(int value) { set(value); }

  /**
   * Set the value of this IntWritable.
   * @param value input value.
   */
  public void set(int value) { this.value = value; }

  /**
   * Return the value of this IntWritable.
   * @return value of this IntWritable.
   */
  public int get() { return value; }

  @Override
  public void readFields(DataInput in) throws IOException {
    value = in.readInt();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(value);
  }

  /** Returns true iff <code>o</code> is a IntWritable with the same value. */
  @Override
  public boolean equals(Object o) {
    if (!(o instanceof IntWritable))
      return false;
    IntWritable other = (IntWritable)o;
    return this.value == other.value;
  }

  @Override
  public int hashCode() {
    return value;
  }

  /** Compares two IntWritables. */
  @Override
  public int compareTo(IntWritable o) {
    int thisValue = this.value;
    int thatValue = o.value;
    return (thisValue<thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
  }

  @Override
  public String toString() {
    return Integer.toString(value);
  }

  /** A Comparator optimized for IntWritable. */
  public static class Comparator extends WritableComparator {
    public Comparator() {
      super(IntWritable.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1,
                       byte[] b2, int s2, int l2) {
      int thisValue = readInt(b1, s1);
      int thatValue = readInt(b2, s2);
      return (thisValue<thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
    }
  }

  static {                                        // register this comparator
    WritableComparator.define(IntWritable.class, new Comparator());
  }
}
```
2. The Custom Serializable Type RectangleWritable
If a custom serializable type additionally implements public int hashCode(), public boolean equals(Object o), and public String toString(), it can be used as a key or value of the Mapper output and of the Reducer input and output (toString is called when the results are written), and it also works with MapReduce's built-in hash partitioning. When implementing hashCode, derive the hash from the type's own field values so that equal objects hash identically across JVM instances; when implementing compareTo, make sure that instances that are actually different never compare as equal.
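To illustrate why hashCode must depend only on field values: Hadoop's default hash partitioner derives the target reduce task from the key's hash, in the spirit of the sketch below. The class and method names here are hypothetical, introduced only for this demonstration:

```java
import java.util.Objects;

// Hypothetical sketch of hash partitioning: the same key must always land
// in the same partition, so hashCode must be stable across JVM instances.
public class HashPartitionSketch {
    public static int getPartition(int keyHash, int numReduceTasks) {
        // Mask off the sign bit so a negative hash cannot produce a negative index.
        return (keyHash & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // A hash derived from field values (length = 6, width = 3) is deterministic,
        // unlike Object's default identity-based hashCode.
        int h = Objects.hash(6, 3);
        System.out.println(getPartition(h, 4));
    }
}
```

If hashCode were identity-based, two deserialized copies of the same logical key could land in different partitions and the grouping would silently break.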
Below is the code for RectangleWritable, a type that carries a rectangle's length and width. I consider a rectangle with length 6 and width 3 to be different from one with length 3 and width 6, and implemented compareTo accordingly.
```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.Objects;

import org.apache.hadoop.io.WritableComparable;

public class RectangleWritable implements WritableComparable<RectangleWritable> {
    private int length, width;

    public int getLength() {
        return length;
    }

    public void setLength(int length) {
        this.length = length;
    }

    public int getWidth() {
        return width;
    }

    public void setWidth(int width) {
        this.width = width;
    }

    public RectangleWritable() {
        super();
    }

    public RectangleWritable(int length, int width) {
        super();
        setLength(length);
        setWidth(width);
    }

    @Override
    public String toString() {
        return String.format("%d\t%d", getLength(), getWidth());
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(getLength());
        out.writeInt(getWidth());
    }

    @Override
    public int hashCode() {
        return Objects.hash(getLength(), getWidth());
    }

    @Override
    public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof RectangleWritable))
            return false;
        RectangleWritable other = (RectangleWritable) o;
        return other.getLength() == getLength() && other.getWidth() == getWidth();
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.length = in.readInt();
        this.width = in.readInt();
    }

    @Override
    public int compareTo(RectangleWritable o) {
        int res = Integer.compare(getLength(), o.getLength());
        return res == 0 ? Integer.compare(getWidth(), o.getWidth()) : res;
    }
}
```
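To check the ordering semantics without a Hadoop dependency, the compare-by-length-then-width logic can be isolated into a plain class (hypothetical, for demonstration only):

```java
// Hypothetical stripped-down version of RectangleWritable's compareTo:
// compare lengths first, and fall back to widths only on a tie.
public class RectangleOrderSketch implements Comparable<RectangleOrderSketch> {
    public final int length;
    public final int width;

    public RectangleOrderSketch(int length, int width) {
        this.length = length;
        this.width = width;
    }

    @Override
    public int compareTo(RectangleOrderSketch o) {
        int res = Integer.compare(length, o.length);
        return res == 0 ? Integer.compare(width, o.width) : res;
    }

    public static void main(String[] args) {
        RectangleOrderSketch a = new RectangleOrderSketch(6, 3);
        RectangleOrderSketch b = new RectangleOrderSketch(3, 6);
        // 6 != 3, so a 6x3 rectangle and a 3x6 rectangle never compare as equal.
        System.out.println(a.compareTo(b));
    }
}
```

This ordering is what sorts the keys before they reach the reducer, so a 6 × 3 rectangle and a 3 × 6 rectangle remain distinct keys.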
3. Rectangle Area Calculation
The input text is as follows.
```
9 9
3 27
7 8
1 1
3 6
6 3
```
3.1 Map
In the Map phase, each input line is split on the space character into a length and a width, which are wrapped in a RectangleWritable and emitted as the output key with a NullWritable value. Because the key implements hashCode and compareTo, the framework then partitions, sorts, and groups the rectangles automatically.
3.2 Reduce
In the Reduce phase, each distinct RectangleWritable key represents one kind of rectangle. The reducer computes its area (length × width) and writes the rectangle as the output key and the area, as an IntWritable, as the output value.
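The Reduce-side arithmetic is just length × width, and it can be sanity-checked against the sample input with plain Java. The class below is a hypothetical helper, not part of the job:

```java
// Hypothetical helper mirroring the reducer's area computation.
public class AreaSketch {
    public static int area(int length, int width) {
        return length * width;
    }

    public static void main(String[] args) {
        // The six rectangles from the sample input.
        int[][] input = {{9, 9}, {3, 27}, {7, 8}, {1, 1}, {3, 6}, {6, 3}};
        for (int[] r : input) {
            System.out.println(r[0] + "\t" + r[1] + "\t" + area(r[0], r[1]));
        }
    }
}
```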
4. Code and Results
4.1 Dependencies in pom.xml
```xml
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.3.6</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>3.3.6</version>
        <type>pom</type>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>3.3.6</version>
    </dependency>
</dependencies>
```
4.2 The Utility Class util
```java
import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class util {
    public static FileSystem getFileSystem(String uri, Configuration conf) throws Exception {
        URI add = new URI(uri);
        return FileSystem.get(add, conf);
    }

    public static void removeALL(String uri, Configuration conf, String path) throws Exception {
        FileSystem fs = getFileSystem(uri, conf);
        if (fs.exists(new Path(path))) {
            boolean isDeleted = fs.delete(new Path(path), true);
            System.out.println("Delete Output Folder? " + isDeleted);
        }
    }

    public static void showResult(String uri, Configuration conf, String path) throws Exception {
        FileSystem fs = getFileSystem(uri, conf);
        String regex = "part-r-";
        Pattern pattern = Pattern.compile(regex);
        if (fs.exists(new Path(path))) {
            FileStatus[] files = fs.listStatus(new Path(path));
            for (FileStatus file : files) {
                Matcher matcher = pattern.matcher(file.getPath().toString());
                if (matcher.find()) {
                    System.out.println(file.getPath() + ":");
                    FSDataInputStream openStream = fs.open(file.getPath());
                    IOUtils.copyBytes(openStream, System.out, 1024);
                    openStream.close();
                }
            }
        }
    }
}
```
4.3 Rectangle Area Calculation
```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class App {
    public static class MyMapper extends Mapper<LongWritable, Text, RectangleWritable, NullWritable> {
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String[] splitStr = value.toString().split(" ");
            RectangleWritable keyOut = new RectangleWritable(Integer.parseInt(splitStr[0]), Integer.parseInt(splitStr[1]));
            context.write(keyOut, NullWritable.get());
        }
    }

    public static class MyReducer extends Reducer<RectangleWritable, NullWritable, RectangleWritable, IntWritable> {
        public void reduce(RectangleWritable key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
            IntWritable area = new IntWritable(key.getLength() * key.getWidth());
            context.write(key, area);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] myArgs = {
            "file:///home/developer/CodeArtsProjects/CalRectangleArea/rectangle.txt",
            "hdfs://localhost:9000/user/developer/CalRectangleArea/output"
        };
        util.removeALL("hdfs://localhost:9000", conf, myArgs[myArgs.length - 1]);
        Job job = Job.getInstance(conf, "CalRectangleArea");
        job.setJarByClass(App.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        job.setMapOutputKeyClass(RectangleWritable.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(RectangleWritable.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < myArgs.length - 1; i++) {
            FileInputFormat.addInputPath(job, new Path(myArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(myArgs[myArgs.length - 1]));
        int res = job.waitForCompletion(true) ? 0 : 1;
        if (res == 0) {
            System.out.println("计算结果为:");
            util.showResult("hdfs://localhost:9000", conf, myArgs[myArgs.length - 1]);
        }
        System.exit(res);
    }
}
```
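For a quick local check without a cluster, the job's logic can be loosely simulated in plain Java: parse each line (map), group by the length/width key, and compute the area (reduce). The sketch below is only an approximation; in the real job, sorting and grouping are performed by the MapReduce framework using RectangleWritable's compareTo, whereas the string keys here sort lexicographically:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical local simulation of the map -> group -> reduce flow.
public class LocalPipelineSketch {
    public static Map<String, Integer> run(String[] lines) {
        Map<String, Integer> result = new TreeMap<>();
        for (String line : lines) {
            // "map": split the line into length and width.
            String[] s = line.split(" ");
            int length = Integer.parseInt(s[0]);
            int width = Integer.parseInt(s[1]);
            // "group + reduce": one entry per distinct rectangle, value = area.
            result.put(length + "\t" + width, length * width);
        }
        return result;
    }

    public static void main(String[] args) {
        String[] input = {"9 9", "3 27", "7 8", "1 1", "3 6", "6 3"};
        run(input).forEach((k, v) -> System.out.println(k + "\t" + v));
    }
}
```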
4.4 Results

References
Wu Zhangyong, Yang Qiang. 大数据Hadoop3.X分布式处理实战 (Big Data Hadoop 3.X Distributed Processing in Action).