我倒没觉得多怪——边上几个“上班族”、“学生党”大多熟悉面孔,其余的多半是今天特地出门赶菜场、赶超市、赶医院的阿姨爷叔。
虽说已经过了立秋,上海这天气依旧炎热,早晨的太阳仅用三成功力,已将马路烤得热气蒸腾,把尽头的筒子楼熏得变了形。公交车里则是上海特有的湿热,外头的热浪从窗口钻进来,混着车里的潮气与汗味儿,让人有些喘不过气来。
“司机空调开了伐啦?”几乎是意料之中,身边的阿姨已经耐不住性子,声音穿过十数人直逼驾驶室。
“啊?”
“空调!”
“开了呀!呐窗子开了组撒啦,关特依呀,开了窗空调难能打得起来啦!”
“窗外头比里厢风凉,搞撒么事啊!”另几个阿姨爷叔也开始议论纷纷,试图给司机施加压力。
“个么呐自家看呀,空调肯定开了呀,呐开窗设宜么就开窗好伐啦?”
阿姨显然对司机的解释并不满意,但无奈中间隔了一车厢的人,没法到驾驶室当面对质,嘟囔了几句就悻悻地掏出手机刷起了短视频。
车程几乎过半,路过好几个老小区,车上的老阿姨老师傅渐渐多了起来,偶尔见着有腿脚不便的,心里总想着起来让个座。不过我坐在靠里的位置,眼朝身边的阿姨一瞥,见她依旧紧锁眉头,便也不好叫她起来,只好放下这个念头,把头埋进手机里。好在他们大多过了几站便下车了,站这么一会儿估计尚能承受。
我还沉浸在自我开导当中,突然听得车前头传来一声炸响,“撒宁帮这个老师傅让个座!”猛地一抬头,看见驾驶室门大开着,司机师傅一手叉腰一手扶门,正在朝我这边喊过来。
我过了约莫半分钟才回过神来,靠近后门的地方站着一位身形佝偻的老者,六、七十岁的样子,褐色的灯芯绒上衣翻毛得厉害,黑色的粗麻长裤好几处已经褪成了白色,标签都未及撕去的金丝眼镜背后是一双空洞的眼睛。
没等我起身,另一个年轻人已经为他让了座。司机师傅多少带着些得意,“霞霞个位年轻人!”
但老者似乎并不领情,仿佛什么都没听到似的依旧紧紧攥着栏杆,双眼无神而笃定地望着窗外。
“跟你说话呢”,我身边的阿姨用上海口音的普通话尖声说道,几乎是要把刚才因为空调憋的气一股脑撒在老者身上,“司机叫这个年轻人给你让座!”
老者抬抬眉毛,似乎反应了过来;我也明白了——老者听不懂司机说的上海话。
他挪着步子走向年轻人刚刚让出的座位,嘴里嘟哝了一句:“原来是让我坐,我还以为是怀疑我呢。”声音虽小,在公交车这铁皮盒子里听起来却格外的响亮。起初大家只是窸窸窣窣地议论,到这会仿佛是能量聚集爆发了似的,统统大笑了起来。
身边的上海阿姨穷追不舍,“人家司机是对你好,叫这个年轻人给你让座,你倒还以为是怀疑你!”
“谢谢啊”,不知是不好意思,还是只是顺水推舟,老者从他仅剩两颗大牙的嘴里好不容易憋出这么三个字来。
阿姨似乎察觉到了老者的不以为然,大声回应道,“对呀,这就是上海呀!”
司机师傅又经过阿姨的翻译问了问老者在哪里下车,嘱咐他到站了慢点走,这才满意地坐回他的宝座,继续朝前开去。
阿姨这句话真是激起了我全身的鸡皮疙瘩,这二十余年里,从未有人这么坚定地说出过这样的话,更不用说让我真正感觉到上海是这样一个包容、博爱、海纳百川的城市。是啊,阿姨说得对,这就是上海嘛!
我正回味着上海的种种美好,司机那头又传来喧闹声——原来是有位年轻女士没决定好要不要上车,在门口看地图,耽误了司机关门开车。
那位女士连连道歉,原本善解人意的司机师傅这回竟丝毫不买账,开足火力厉声批评。“你站在这我怎么开车?”“你让全车的人等你?”“你就不能提前看好地图?”女士根本不敢吱声,慌里慌张地刷了卡就到后面来站着了。不料司机师傅依旧穷追不舍,质问声混着热浪扑过来。
我身边的阿姨又坐不住了,仿佛她才是这班公交的乘务员,“司机侬开呀,人家对伐起啊岗好了呀!一车子的人都交给你了,专心开车!”阿姨话音落下,车上众人也拧上了发条一般,东一句西一句地小声附和。
在大家的努力下,车总算开了。
车离终点越来越近,我却有些恍惚,总觉得我才刚刚上车。
临到站,我背上包预备下车。意外的是,身边的阿姨早已察觉,客气地收起手机,起身为我让道。我几乎有些惶恐,站起来就闷头往后门走。
车门外是杨浦随处可见的梧桐,灼热的空气模糊着视线,像每个上海的夏天一样。我竟觉得有些陌生。
到站了,我下了车,回过身看着车缓缓驶离,却又觉得自己好像从来没下过车。
也许这才是上海,有包容、温情,也有小鸡肚肠、表面和气。车上车下,没有哪一件事是全部的上海,却每一件事都是上海。
I didn’t think much of it - most of the people around me were familiar faces, office workers and students, and the rest were aunties and uncles making a special trip to the market, the supermarket, or the hospital.
The Beginning of Autumn had passed, yet Shanghai’s weather remained hot. The morning sun, using only a fraction of its strength, was already making the road sizzle and warping the tenement building at the end of the street in the heat haze. Inside the bus was Shanghai’s particular humid heat: hot air from outside seeped in through the windows and mixed with the dampness and sweat inside, making it hard to catch one’s breath.
“Driver, why isn’t the air conditioning on?” Almost as expected, the auntie beside me couldn’t hold back and her voice pierced through the others, reaching the driver.
“Huh?”
“The air conditioning!”
“It’s on! What did you open the windows for? Close them - with the windows open, the air conditioning can’t keep up!”
“It’s cooler outside the window than in here - what’s the point!” Other aunties and uncles chimed in, trying to put pressure on the driver.
“Then see for yourselves - the air conditioning is definitely on. If you’d rather have the windows open, then keep them open, alright?”
The auntie was clearly not satisfied with the driver’s explanation, but with people in between, she couldn’t confront him directly in the driver’s seat. She muttered a few words and took out her phone to watch short videos.
As the journey continued and we passed several older neighborhoods, more elderly people got on the bus. Occasionally, when I saw someone less mobile, I thought of giving up my seat. However, I was sitting in the inner seat; glancing at the auntie beside me, who still had a furrowed brow, I didn’t have the heart to ask her to stand up, so I put the thought aside and buried my head in my phone. Luckily, most of them got off after a few stops, so standing for that short a while was probably bearable for them.
Lost in my thoughts, I suddenly heard a loud noise from the front of the bus - it turned out that the driver was asking the elderly man standing near the back door to take a seat!
After about half a minute, I realized what was happening. Standing near the back door was a stooped elderly man in his sixties or seventies, wearing a badly worn brown corduroy jacket and black coarse linen pants that had faded to white in several places, a pair of hollow eyes behind wire-framed glasses whose label had not even been torn off.
Before I could get up, another young man had already offered his seat. The driver sounded rather pleased with himself: “Thank you, young man!”
But the elderly man seemed indifferent, as if he had not heard anything, still holding onto the railing tightly, his eyes empty and unwavering as he gazed outside the window.
“He’s talking to you,” the auntie beside me said shrilly in Shanghai-accented Mandarin, all but venting the frustration from the air conditioning episode onto the elderly man. “The driver asked this young man to give you his seat!”
The elderly man raised his eyebrows, seeming to catch on at last. I understood too - he couldn’t understand the Shanghainese the driver spoke.
He shuffled towards the seat that the young man had just vacated and muttered, “So he wanted me to sit, I thought he was suspecting me.” His voice, although quiet, sounded exceptionally loud in the metal box of the bus. Initially, people were whispering, but now it seemed like an eruption of energy, and everyone burst into laughter.
The Shanghai auntie beside me persisted, “The driver is being nice to you, asking this young man to give up his seat for you, and you thought he was suspecting you!”
“Thank you,” whether out of embarrassment or just going with the flow, the elderly man managed to squeeze out these three words from his mouth, with only two teeth remaining.
The auntie seemed to notice the elderly man’s reluctance and loudly responded, “Yes, this is Shanghai!”
The driver, through the auntie’s translation, asked the elderly man where he needed to get off and instructed him to walk slowly when he reached his stop. Satisfied, the driver returned to his seat and continued driving forward.
The auntie’s words gave me goosebumps all over. In more than twenty years, no one had said such words so resolutely, let alone made me truly feel that Shanghai is such a tolerant, loving, and all-embracing city. Yes, the auntie was right - this is Shanghai!
As I was savoring the beauty of Shanghai, there was suddenly a commotion from the driver’s end - it turned out a young lady was unsure whether to board the bus, standing at the door looking at her map, delaying the driver from closing the door and moving.
The lady repeatedly apologized, but the usually understanding driver didn’t take it lightly, criticizing her sternly. “How can I drive with you standing here?” “You want the whole bus to wait for you?” “Can’t you check the map before?” The lady didn’t dare to utter a word, swiped her card in a hurry, and went to stand at the back. However, the driver continued to chase after her, his questioning mixed with the heat of the moment.
The auntie beside me couldn’t sit still, as if she were the bus attendant: “Driver, just drive - she’s already apologized! The whole bus is in your hands, focus on driving!” As her voice fell, the other passengers wound up like clockwork, murmuring their agreement in scattered phrases.
With everyone’s effort, the bus finally started moving.
As the bus inched closer to the final stop, I felt somewhat dazed, as if I had just gotten on the bus.
As we approached the stop, I prepared to get off with my bag on my back. Surprisingly, the auntie next to me had already noticed and politely put away her phone, standing up to make way for me. I was almost anxious, standing up and heading towards the back door.
Outside the door were the ubiquitous plane trees of Yangpu, the scorching air blurring my vision, like every summer in Shanghai. I felt a bit unfamiliar with it all.
As I got off the bus, I turned around to watch the bus slowly depart, feeling as if I had never gotten off the bus.
Perhaps this is Shanghai - tolerant and warm, but also petty and superficially polite. On the bus and off it, no single thing is the whole of Shanghai, yet every single thing is Shanghai.
加州淘金错过了,比特币错过了,大语言模型仿佛又要错过。不过也无妨,错过的只是这个平行宇宙。
不晚乎——心里默念,总有人会用得上罢。
网页端由于Cloudflare管得比较严,估摸着反向代理有封号风险,于是转而反向代理OpenAI API。
代理起来相当容易,首先安装Caddy 2。
编写/etc/Caddyfile:
<host>:<port> {
reverse_proxy https://api.openai.com {
header_up Host api.openai.com
}
}
其中,<host>应当替换为代理服务器的IP或者域名,<port>应当替换为监听的端口。
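举一个假设的完整例子(其中的域名proxy.example.com与端口8443均为随手虚构,仅作示意):

```
proxy.example.com:8443 {
    reverse_proxy https://api.openai.com {
        header_up Host api.openai.com
    }
}
```

这样,Caddy会在proxy.example.com的8443端口上监听,并将收到的请求转发给api.openai.com。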
值得注意的是,OpenAI API的Cloudflare防御机制会检测请求中的Host值,以判断请求是否确实发向OpenAI。如果不为api.openai.com,将返回403 Forbidden错误。因此必须设置header_up Host api.openai.com,将请求头中的Host修改为对应值。
运行sudo systemctl start caddy开启Caddy服务器,随后可以使用curl测试代理服务器是否工作正常:
$ curl https://<host>:<port>/v1/models
{
"error": {
"message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
返回值提示需要提供API Key,表示已经配置成功。
如果说反向代理WOD的难度是13,反向代理ChatGPT的难度只能勉强打4分!
如果喜欢使用JSON配置Caddy,也可以参考以下配置:
{
"admin": {
"disabled": true
},
"logging": {
"logs": {
"log0": {
"writer": {
"output": "stdout"
},
"encoder": {
"format": "console"
},
"level": "WARN"
}
}
},
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [":<port>"],
"routes": [
{
"match": [{ "host": ["<host>"] }],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "reverse_proxy",
"headers": {
"request": {
"set": {
"Host": ["api.openai.com"]
}
}
},
"transport": {
"protocol": "http",
"tls": {}
},
"upstreams": [{ "dial": "api.openai.com:443" }]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
回到开头的问题,究竟为什么(请原谅,我总是一个喜欢问为什么的人)会错过?
似乎有两个主要的方面,一个是开始的动力,一个是持续的毅力。就好比前段时间大火的核聚变试验,点火成功,能量转化大于能量输入,这注定是一次不可磨灭的成功——一个好的开始是成功的一半。但同时,聚变最为困难的就是持续控制等离子体,保持高温高压环境,确保聚变稳定发生(请原谅,我是一个不太会举例子的人)。
人做一件事也没有太大的差别,要做好一件事,首先需要一个自己确信的由头,再加上一些破釜沉舟的劲头,这已经是极难的了。例如要用上ChatGPT,要买上手机号,与俄罗斯电话号码商斗智斗勇,又要准备代理链接,和国内网络代理商斗智斗勇,又要逃避OpenAI的监管,和对岸资本主义斗智斗勇,最后到头来打开了界面,还要头疼地想问些什么、怎么问。仿佛是回到新石器时代,重新学怎么使用石锤、石锄,怎样炼铁……
说复杂,倒也不复杂,费尽心思用上了ChatGPT,总能做些什么吧?那倒也说不准。打开问答界面:没有提问的欲望。打开工作文档:找不到提问点。打开VSCode:不知道API能做什么。
空,空空的。
虽说开始做一件事不容易,但它也就是那么一瞬的事。而坚持,是一个周、一个月,是十年的冷板凳。况且,坚持不是一句口号,坚持的途中是不间断的复杂思维与发明创造——没有惊喜的日子谁都过不下去,再要是真的没有惊喜,那就只能自己动手创造。
翻来覆去说了那么多,也就无非那么两句话:万事开头难,修行靠自身。不少成功者恐怕都是这样走来的。
Missed out on the California Gold Rush, missed out on Bitcoin, and it seems like we are about to miss out on the large language models again. But it’s okay, what we miss is just this parallel universe.
Not too late, I told myself - surely someone will find this useful.
Since Cloudflare keeps a tight watch on the web interface, reverse proxying it likely risks an account ban, so I turned to reverse proxying the OpenAI API instead.
It is quite easy to set up the proxy. First, install Caddy 2.
Write /etc/Caddyfile:
reverse_proxy https://api.openai.com {
header_up Host api.openai.com
}
}
Here, <host> should be replaced with the IP or domain of the proxy server, and <port> with the listening port.
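As a hypothetical filled-in example (the domain proxy.example.com and port 8443 below are made up purely for illustration):

```
proxy.example.com:8443 {
    reverse_proxy https://api.openai.com {
        header_up Host api.openai.com
    }
}
```

With this, Caddy listens on port 8443 of proxy.example.com and forwards incoming requests to api.openai.com.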
It is worth noting that the OpenAI API’s Cloudflare defense mechanism checks the Host value in the request to determine whether the request was indeed sent to OpenAI. If it is not api.openai.com, a 403 Forbidden error is returned. Therefore, you must set header_up Host api.openai.com to rewrite the Host request header accordingly.
Run sudo systemctl start caddy to start the Caddy server. You can then test whether the proxy is working properly using curl:
$ curl https://<host>:<port>/v1/models
{
"error": {
"message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
The returned message indicates that an API Key is required, indicating successful configuration.
If reverse proxying WOD had a difficulty level of 13, reverse proxying ChatGPT would only score a mere 4!
If you prefer using JSON configuration for Caddy, you can also refer to the following configuration:
{
"admin": {
"disabled": true
},
"logging": {
"logs": {
"log0": {
"writer": {
"output": "stdout"
},
"encoder": {
"format": "console"
},
"level": "WARN"
}
}
},
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [":<port>"],
"routes": [
{
"match": [{ "host": ["<host>"] }],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "reverse_proxy",
"headers": {
"request": {
"set": {
"Host": ["api.openai.com"]
}
}
},
"transport": {
"protocol": "http",
"tls": {}
},
"upstreams": [{ "dial": "api.openai.com:443" }]
}
]
}
]
}
],
"terminal": true
}
]
}
}
}
}
}
Going back to the initial question, why did we miss out (please forgive me, I am someone who always likes to ask why)?
It seems there are two main aspects: the initial drive and the perseverance. Just like the recent nuclear fusion experiment that successfully ignited and converted energy greater than the energy input, it is destined to be an indelible success—a good start is half the battle. However, the most difficult part of fusion is to sustainably control plasma, maintain a high-temperature and high-pressure environment, and ensure stable fusion occurs (please forgive me, I am not good at giving examples).
It is not much different when a person sets out to do something. To do it well, you first need a reason you truly believe in, plus some burn-the-boats determination - and that alone is already extremely hard. For example, to use ChatGPT, you need to buy a phone number and outwit the Russian number vendors, prepare proxy links and outwit the domestic network proxy vendors, and evade OpenAI’s restrictions while outwitting the capitalists across the ocean. And when you finally open the interface, you still have to agonize over what to ask and how to ask it. It’s like going back to the Neolithic era, relearning how to use a stone hammer and a stone hoe, and how to smelt iron…
It may sound complex, but it’s not that complicated. After putting in the effort to use ChatGPT, you can always accomplish something, right? Well, that’s uncertain. Open the Q&A interface: no desire to ask questions. Open the work document: can’t find a starting point. Open VSCode: not sure what the API can do.
Empty, utterly empty.
Although starting something is not easy, it’s just a moment of effort. However, persistence is a week, a month, or a decade of waiting. Moreover, persistence is not just a slogan; it involves continuous complex reasoning and creativity—no one can survive without surprises every day. And if there are really no surprises, you have to create them yourself.
After all this talking, it boils down to two sentences: every beginning is hard, and the rest of the journey depends on yourself. Many successful people have probably walked this very path.
那咋办,打不过就加入呗,咱们Jellyfin x Manjaro系列也刷个版本号。
Jellyfin x Manjaro系列第三回只讨论了使用QSV中出现的部分问题;而《让FFmpeg用上QSV编码器(手动挡)》所介绍的安装方法实在曲折繁琐,只适用于我这样的“五菱高手”——自动挡才是大趋势,手动党难成大业!
说白了,就是缺一篇完整实现QSV加速、使用FFmpeg 6.0、方便快捷干净卫生的教程呗!
开个玩笑,其实,在Manjaro上使用QSV非常容易,因为你需要的、你想要的、你不要的软件包,都有大神半仙提前准备好了。
友情提示,本篇教程只适用于支持intel-media-driver的Intel显卡,具体型号列表见Intel Media Driver GitHub仓库。
Intel显卡驱动包括驱动程序intel-media-driver,以及前端API intel-media-sdk或onevpl。其中,较新的OneVPL仅支持11代及以后的核显/独显。
# 11代及以上
sudo pacman -S intel-media-driver onevpl-intel-gpu
# 其余型号
sudo pacman -S intel-media-driver intel-media-sdk
安装完成后,编辑/etc/profile.d/libva.sh,添加下面两行,告诉系统使用最新的iHD显卡驱动(即intel-media-driver),而不是已经过时的i965驱动,重启系统使配置生效:
export LIBVA_DRIVERS_PATH=/usr/lib/dri
export LIBVA_DRIVER_NAME=iHD
随后安装libva-utils查看驱动识别情况。
sudo pacman -S libva-utils
运行vainfo命令,如果出现类似下述的输出,则表示驱动已经安装成功。
$ vainfo
Trying display: wayland
Trying display: x11
error: can't connect to X server!
Trying display: drm
vainfo: VA-API version: 1.18 (libva 2.17.1)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.5.2 (ccc137c92)
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileVP8Version0_3 : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointEncSlice
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
如果你的电脑有多张显卡,那么直接运行vainfo很可能会报错。此时不妨试试vainfo --display drm --device /dev/dri/renderD12x,将/dev/dri/renderD12x替换为正确的显卡文件路径。只要有任意一张显卡支持iHD驱动即可,FFmpeg通常会自动识别并使用其中支持QSV的显卡。
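下面是一段粗略的探测脚本(假设render节点都位于标准的/dev/dri/路径下,仅作示意),逐个节点运行vainfo,帮助找出iHD驱动认领的显卡:

```shell
# 枚举所有DRM render节点,逐个用vainfo探测驱动情况
for dev in /dev/dri/renderD*; do
  if [ -e "$dev" ]; then
    echo "== $dev =="
    vainfo --display drm --device "$dev" 2>&1 | grep -m1 "Driver version" || echo "   (该节点没有可用的VA-API驱动)"
  else
    echo "未找到render节点"
  fi
done
```

找到打印出iHD驱动版本的那个节点,就可以在上面的vainfo命令中使用它的路径。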
Intel显卡驱动的OpenCL后端目前由intel-compute-runtime提供,用于将HDR视频转换为SDR播放。Manjaro官方源的版本较老,因此我们使用AUR源安装。
AUR软件源是一个软件包共享平台,用户可以自行提交发布软件包与安装脚本供其他用户使用。使用AUR软件源一般需要首先安装yay包管理工具。
sudo pacman -S --needed git base-devel yay
随后使用yay安装intel-compute-runtime。
yay intel-compute-runtime
在yay展示的各个选项中选择编译好的intel-compute-runtime-bin即可。安装完成后,可以使用clinfo命令查看是否安装成功。
最新发布的Jellyfin 10.8.10修复了两个重要安全漏洞,并且推荐与jellyfin-ffmpeg6组合使用。
AUR已有编译好的jellyfin-bin软件包供下载,也有nyanmisaka上传的最新版jellyfin-ffmpeg6。
yay jellyfin-bin jellyfin-ffmpeg6
最后,使用systemd启动jellyfin,打开http://localhost:8096即可食用。
# 立刻启动,并配置开机自启
sudo systemctl enable --now jellyfin
在Jellyfin网页界面中进入Dashboard -> Playback,将硬件加速(Hardware Acceleration)设置为Intel Quick Sync (QSV)。
参照下图勾选转码相应功能。
Enable hardware decoding for:对以下视频格式开启硬件解码。应根据显卡实际支持情况进行选择。
Prefer OS native DXVA or VA-API hardware decoders:解码时使用DXVA或VA-API硬件解码,而不使用QSV加速。使用QSV解码出错时可以勾选。
Enable hardware encoding:开启硬件编码。需要勾选。
Enable Intel Low-Power H.264 hardware encoder与Enable Intel Low-Power HEVC hardware encoder:开启低功耗H.264/HEVC硬件编码器。9代以上的CPU可以尝试勾选这两个选项,以加速HDR转SDR播放。在12代核显上不需要额外进行配置,其他型号请看Jellyfin官方文档。
Allow encoding in HEVC format:允许使用HEVC格式编码视频。如果你用来观看视频的设备支持HEVC解码,则建议勾选。
参照下图勾选HDR色调映射(Tone Mapping)相关功能,用于HDR视频转SDR播放。
Enable VPP Tone mapping:VPP色调映射。效率比OpenCL更高,但仅支持HDR10,兼容性较差,不建议勾选。
Enable Tone mapping:OpenCL色调映射。建议勾选。
到此配置完成。
大环境总是去繁就简的。我小学的时候,家长接送孩子学的都还是手动挡。十年以后的今天,一眼望去,手动挡已经一车难觅。你问我手动挡和自动挡能做的事情有什么不同?我会说,差不离。但自动挡好上手,容易学,让更多的人能够在很短的时间里学会开车,成为自己的旅途的主人。
对于操作系统而言,同样如此——那些开着“自动挡”的操作系统在吸引用户方面具有天然的优势。但Linux不是轿车也不是巴士,而是载人航天——一个永远离不开“手动挡”的地方。Manjaro Linux正在迅速流失用户这个问题是一个悖论——Manjaro不是Steam OS,作为Linux发行版,它的目标不可能,也不应该是服务大多数人。它更像是一个带教员、掌门人,提供便捷的包管理系统,帮助对Linux真正感兴趣的人了解这个操作系统,并基于此了解计算机的工作原理。用户数量究竟多少并不重要,甚至用户的减少意味着有更多的用户已经“出师”,开始使用更加底层的Arch Linux,或者开始使用更加稳定的Linux发行版进行生产工作,甚至可能已经融会贯通,学会了在一些“自动挡”操作系统上实现各种“手动超控”。
或许,现在的我们离不开Manjaro,只是因为我们还是书生。
不如珍惜当下的简单,因为不知何时总要告别。
So what to do? If you can’t beat them, join them! Let’s also bump the version number of the Jellyfin x Manjaro series.
The third installment of the Jellyfin x Manjaro series only discussed some issues with using QSV, while the installation method highlighted in manually enabling QSV in FFmpeg was quite complex and convoluted. It’s only suitable for “manual-mode pros” like me - automatic mode is the real trend, and manual transmission enthusiasts may find it challenging to succeed!
To put it simply, we need a complete guide that implements QSV acceleration efficiently, utilizes FFmpeg 6.0, and is convenient, quick, and clean!
Just kidding! Actually, using QSV on Manjaro is very easy because the software packages you need, want, and don’t want have already been prepared by the experts.
Friendly reminder: this tutorial only applies to Intel GPUs supported by intel-media-driver; the specific model list can be found in the Intel Media Driver GitHub repository.
Intel GPU drivers consist of the driver intel-media-driver and the front-end API intel-media-sdk or onevpl. The newer OneVPL only supports 11th-generation and newer Intel GPUs.
# For 11th generation and above
sudo pacman -S intel-media-driver onevpl-intel-gpu
# For other models
sudo pacman -S intel-media-driver intel-media-sdk
After installation, edit /etc/profile.d/libva.sh and add the following two lines to instruct the system to use the latest iHD GPU driver (intel-media-driver) instead of the outdated i965 driver. Restart the system to apply the configuration:
export LIBVA_DRIVERS_PATH=/usr/lib/dri
export LIBVA_DRIVER_NAME=iHD
Then install libva-utils to check driver recognition.
sudo pacman -S libva-utils
Run the vainfo command. If you see output similar to the following, the driver has been installed successfully.
$ vainfo
Trying display: wayland
Trying display: x11
error: can't connect to X server!
Trying display: drm
vainfo: VA-API version: 1.18 (libva 2.17.1)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.5.2 (ccc137c92)
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
...
If your computer has multiple GPUs, running vainfo directly may result in an error. In that case, try vainfo --display drm --device /dev/dri/renderD12x, replacing /dev/dri/renderD12x with the correct GPU device path. As long as one GPU supports the iHD driver, FFmpeg will usually detect and use a GPU that supports QSV automatically.
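A rough probing sketch (assuming the standard /dev/dri/ layout; for illustration only) that runs vainfo against every render node to help locate the GPU claimed by the iHD driver:

```shell
# Enumerate all DRM render nodes and probe each one with vainfo
for dev in /dev/dri/renderD*; do
  if [ -e "$dev" ]; then
    echo "== $dev =="
    vainfo --display drm --device "$dev" 2>&1 | grep -m1 "Driver version" || echo "   (no usable VA-API driver on this node)"
  else
    echo "no render nodes found"
  fi
done
```

The node that prints an iHD driver version is the one to pass to the vainfo command above.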
The OpenCL backend for Intel GPU drivers is currently provided by intel-compute-runtime, used to convert HDR videos to SDR for playback. Since the version in the official Manjaro repository is outdated, we will install it from the AUR.
The AUR is a package-sharing platform where users can submit and publish packages and installation scripts for others to use. To use the AUR, you generally need to install the yay package manager first.
sudo pacman -S --needed git base-devel yay
Then use yay to install intel-compute-runtime.
yay intel-compute-runtime
Select the pre-compiled intel-compute-runtime-bin from the options presented by yay. After installation, you can use the clinfo command to check whether it was installed successfully.
The recently released Jellyfin 10.8.10 addresses two critical security vulnerabilities and is recommended to be used together with jellyfin-ffmpeg6.
There is a pre-compiled jellyfin-bin package available in the AUR for download, as well as the latest version of jellyfin-ffmpeg6 uploaded by nyanmisaka.
yay jellyfin-bin jellyfin-ffmpeg6
Finally, start jellyfin via systemd and open http://localhost:8096 to start streaming.
# Start immediately and configure to start on boot
sudo systemctl enable --now jellyfin
In the Jellyfin web interface, go to Dashboard -> Playback and set Hardware Acceleration to Intel Quick Sync (QSV).
Refer to the images to select the appropriate transcoding functions.
By completing the configuration, Jellyfin should be all set for your needs.
It is almost obvious that extra-large language models would perform better in most tasks than language models in smaller form factors, such as ChatGLM, Alpaca, Llama, etc., especially when it comes to logically intense question answering. Under such assumptions, there is simply no reason for individuals to deploy and use smaller-scale LLMs that would almost definitively perform worse than the flagship models that were made readily and freely available online, such as ChatGPT.
This is not only the alarm bell but the pending death sentence to all the medium-sized models whose sole purpose is to take on flagship models such as ChatGPT - no potential users whatsoever.
Moreover, the future of such medium-sized models is constrained by the limiting terms of service of what is often their most important data source, namely ChatGPT, as most of the well-performing medium-sized models, such as Alpaca, Vicuna, and Koala, rely heavily on data generated by services affiliated with OpenAI. And, to no one’s surprise, even Google Bard was allegedly distilling ChatGPT data to improve its performance.
The deadliest blow comes in the absence of a comprehensive method to evaluate how good the LLMs are actually performing, leaving small-scale LLMs no way to prove their worth. The current evaluation efforts are limited to scores given by GPT-4 (as in Vicuna), human evaluation over a limited set of questions (as in Koala), and the evaluation of the models against traditional NLP test sets. None of these methods are reliable or convincing enough to be considered golden rules to pick the best model. As such, the amount of exposure a model can get decides its rise and fall, leaving small-scale LLMs no chance against established brands starring OpenAI and Microsoft.
The development of LLMs is helplessly sliding down the slope of monopoly. Credible competitors will be scarce. Eventually, smaller-scale LLMs are going to be relevant only in select use cases such as offline deployment in private firms, or mobile deployment on laptops and phones, where full-scale LLMs are not available, or where the horsepower of a state-of-the-art LLM is simply not necessary.
OpenAI CEO Sam Altman said in his interview on April 13th that the size of LLMs won’t matter as much moving forward. The remark can be roughly perceived in two different ways.
First, if what he said was based on facts discovered in experiments conducted inside OpenAI, which is the only entity on earth capable of conducting such research, then OpenAI will remain the one and only firm with the capital and access to fine-tune the largest and most performant models in the world, which they own. And that automatically implies unfair competition to come.
Second, if the signal was just camouflage, and increasing model size does still give performance boosts, then it essentially tells the other competitors to back down from the war of model size and research fine-tuning instead, which OpenAI can easily snatch and copy whenever the research is published, while they keep working on increasing the quantity and quality of their training data.
In either case, OpenAI wins and the monopoly prevails.
According to an anonymous source (Xiaoyi Ma), the gap in artificial intelligence technologies between states will only widen in the years to come, partly because the big NLP players control the biggest data, the biggest infrastructure, and the biggest capital, which will only attract even more investment and talent for them.
What is even more alarming is that the development of open-source medium-sized models is very likely to be suppressed by the success of the flagship models, given the lack of proper evaluation methods and adequate public exposure, which will ultimately be reflected in the slow replication and assimilation of such technologies among other states.
The introduction of accelerated training frameworks like DeepSpeed might mitigate the uncrossable gap between medium-sized and large-sized models. However, the lack of open data and the fact that the highest quality data comes from ChatGPT still make me wonder if the monopoly can ever be lifted.
办公室的服务器刚好够用。
但大约有45.67%的原因是有一个野蛮之人强取豪夺,妄图再霸占一台;也有大约62.72%的原因是因为有一个胆小之人装聋作哑,不愿牺牲自己的机子。于是,在这个108.39%概率发生的世界线上,多了一个甘愿贡献自己的只因,甘愿吃亏的好心人。
只因终究还是太少。
胆小鬼虽然也有些小气,但听说好心人没有机用,心里不是滋味,却也不好凭空变一台出来。
“既然不能无中生有,那就只能试试‘螺蛳壳里做道场’‘宰相肚里能撑船’了。”
胆小鬼决定,用KVM把一台机子拆两部,一部自己用,一部留给好心人。就算野蛮人再横,也没法再找理由来抢了。
依据胆小鬼的理解,KVM只是一种虚拟机的虚拟化方式,而真正模拟出硬件来供虚拟机使用的依旧是QEMU,真正管理虚拟机的依旧是Libvirt命令。也因此只好先安装QEMU与libvirt。
sudo apt install --no-install-recommends qemu-system libvirt-clients libvirt-daemon-system qemu-utils
--no-install-recommends:不安装推荐程序包。如果不需要图形化管理工具,可选择此选项。
胆小鬼也怕事,想让自己的非root用户也能管理虚拟机,于是运行以下命令:
sudo adduser <youruser> libvirt
这还不够,如果直接运行虚拟机管理命令virsh,管理的是当前用户名下的虚拟机,如果要管理root名下的虚拟机,还需要作以下调整。
virsh --connect qemu:///system list --all
这样一来,每次一用命令就得输入一遍--connect qemu:///system,那还得了?还好可以导入环境变量,让virsh一心一意管理系统的虚拟机。
# 将以下环境变量声明放进~/.bashrc或者~/.zshrc中
export LIBVIRT_DEFAULT_URI='qemu:///system'
办公室不仅只因不够用,网线也不够。胆小鬼的机器只插着一根网线,得想办法让那一根网线同时供主机和客机上网才行。而且,还得让客机也能分配到内网IP地址,不然还得给好心人做端口映射,那多麻烦事儿!
libvirt的网络配置大致分为三种。
胆小鬼不出所料,选了最简单的桥接网络。
sudo ip link add br0 type bridge # 添加一个名为br0的网桥
sudo ip link set <device> up # 启用一个网络设备,如网口enp0s2
sudo ip link set <device> master br0 # 将设备添加到网桥
sudo ip address add dev br0 192.168.1.142/24 # 将主机网桥的IP设置为192.168.1.142
如此,桥接已经配置好,但重启就会失效,要叫它保持下去,得用bridge-utils软件包。
sudo apt install bridge-utils
随后,编辑/etc/network/interfaces配置网络接口。例如原先使用的网口为enp0s2,那么就将原有的iface enp0s2 inet dhcp一行替换为如下内容:
# 将网络接口enp0s2设置为手动配置,以防与NetworkManager产生冲突
iface enp0s2 inet manual
# 配置网桥br0
auto br0
iface br0 inet static
bridge_ports enp0s2
address 192.168.1.142
broadcast 192.168.1.255
netmask 255.255.255.0
gateway 192.168.1.1
使用systemd重启networking服务,网络配置就生效了。
sudo systemctl restart networking
接下来,要让libvirt的虚拟机使用网桥br0,还需要对网桥进行声明。首先创建一个br0-bridge.xml文件,内容如下:
<network>
<name>br0-bridge</name>
<forward mode="bridge" />
<bridge name="br0" />
</network>
然后运行virsh命令导入声明配置,并启动该网络、设置自启:
virsh net-define br0-bridge.xml
virsh net-start br0-bridge
virsh net-autostart br0-bridge
使用virsh net-list --all就可以看到现有的全部网络。
虽说胆小怕事,但胆小鬼也是出了名的心细。好心人一直以来用的都是正黄旗的Ubuntu Desktop,gnome桌面环境。这次是为了好心人有机用才硬着头皮捣鼓KVM,可不得把只因做成他爱用的样子?
不巧,胆小鬼自己的只因原本运行着没有图形界面的Debian,而Ubuntu Desktop安装时又恰好需要图形界面,无奈之下,只好退而求其次,安装Ubuntu Server,图形界面另外解决。
首先下载好系统镜像,然后安装libosinfo-bin软件包,帮助virt-install命令识别系统版本:
sudo apt install libosinfo-bin
可以运行osinfo-query os命令来查看virt-install支持的系统版本。由于Ubuntu 22.04不在libosinfo-bin软件包自带的列表中,还需要从libosinfo托管网站手动下载更新osinfo的数据库。
wget -O "/tmp/osinfo-db.tar.xz" "https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"
osinfo-db-import --user "/tmp/osinfo-db.tar.xz"
--user:数据库导入位置。如果选择--user,数据库将会存储在~/.config/osinfo;如果选择--local,数据库将存储在/etc/osinfo;如果选择--system,数据库将存储在/usr/share/osinfo。
编辑虚拟机安装命令:
virt-install --virt-type kvm --name <domain-name> \
--location <path/to/ubuntu-22.04.iso>,kernel=casper/vmlinuz,initrd=casper/initrd \
--os-variant ubuntu22.04 \
--vcpus 10,maxvcpus=20 --cpu host \
--disk size=120 --memory 4096 \
--network network=br0-bridge \
--graphics none \
--console pty,target_type=serial \
--extra-args "console=ttyS0"
--name:虚拟机的名字,也称作域名(domain name)。
--location:系统镜像位置。可以是网络位置,如https://cn.archive.ubuntu.com/ubuntu/dists/jammy/main/installer-amd64/,也可以是本地路径;--cdrom参数同样可以指定系统镜像,但只支持本地路径。在没有图形界面的环境中,必须使用--extra-args的自定义内核参数开启串行控制台(Serial console)来安装系统,而--cdrom又恰好不支持自定义内核参数,因此只能使用--location参数。由于--location选项不能自动识别镜像中的内核位置,所以需要手动指定kernel=casper/vmlinuz,initrd=casper/initrd。
--os-variant:操作系统类型。可以运行osinfo-query os命令来查看支持的版本。
--vcpus:初始CPU线程数。KVM虚拟机中的每一个虚拟线程绑定一个真实线程,所以设置超过真实线程数的虚拟CPU数量是没有意义的。但一个真实线程可以被同时分配到多个虚拟机中,通过调度器完成多个虚拟机指派的工作,这也是CPU可以超售(CPU oversell)的由来。
--cpu:CPU配置。可以配置CPU的型号与特性。当型号设置为host时,虚拟机将拥有主机CPU的所有特性,但也可能会导致无法在线迁移(live migration)。
--disk opt1=val1,opt2=val2,...:虚拟机存储设备。可以通过size选项设置大小,也可以通过path选项设置路径。
--memory:内存大小。
--network:选择网络。这里通过network=br0-bridge选择刚才创建的桥接网络。
--graphics、--console、--extra-args:为虚拟机配置一个串行控制台,用于在没有图形界面的情况下从主机直接操作虚拟机。
运行上述命令,进入安装界面,选择基本模式(basic mode),就可以通过文本控制台安装系统了。
原本好心人可以直接坐到自己的服务器面前,泡杯水,就着温吞的开水不紧不慢地点亮屏幕,配置环境或是运行代码。
但由于胆小鬼没有安装桌面环境,好心人也就没法打开虚拟机的图形界面。胆小鬼虽说怕事,但心里一衡量,如果把远程桌面装好,那还比原来坐到服务器面前操作更方便,收益大于成本。于是便咬咬牙开始帮好心人配置。
首先VNC需要一个桌面环境——别忘了刚才安装的是Ubuntu Server!胆小鬼为好心人量身选择了他爱用的gnome。
sudo apt install gnome-session gdm3 # 安装gnome桌面环境与窗口管理器gdm3
sudo apt install ubuntu-desktop # 安装桌面环境必须的各个软件包
sudo systemctl set-default multi-user.target # 不要默认启动图形环境
VNC服务端选用了TigerVNC。
sudo apt install tigervnc-standalone-server dbus-x11
配置~/.vnc/xstartup:
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
export DESKTOP_SESSION=/usr/share/xsessions/ubuntu.desktop
export XDG_CURRENT_DESKTOP=ubuntu:GNOME
export GNOME_SHELL_SESSION_MODE=ubuntu
export XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch --exit-with-session /usr/bin/gnome-session --systemd --session=ubuntu
--systemd:如果gnome-shell版本低于3.40,则需要省去该参数。
配置~/.vnc/config:
session=ubuntu
geometry=1920x1080
localhost
alwaysshared
alwaysshared:所有客户端都会连接到同一个会话。
配置/etc/systemd/system/vncserver@.service:
[Unit]
Description=Start TigerVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=<youruser>
Group=<youruser>
WorkingDirectory=/home/<youruser>
PIDFile=/home/<youruser>/.vnc/%H:%i.pid
ExecStartPre=-/bin/sh -c "/usr/bin/vncserver -kill :%i > /dev/null 2>&1"
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1920x1080 -localhost :%i
ExecStop=/usr/bin/vncserver -kill :%i
Restart=on-success
RestartSec=10
[Install]
WantedBy=multi-user.target
-localhost:仅允许从本机访问VNC。如果需要远程访问,则需要配合SSH安全隧道进行转发。
Restart、RestartSec:当客户端进行注销操作时,服务端就会自行退出,如果希望服务端继续运行,则需要添加上述重启参数。
最后,通过systemd开启VNC服务端:
sudo systemctl enable --now vncserver@1
@1:会话编号。会话编号设为1,则端口号为5901;编号为2,端口号为5902,以此类推。
最后,建立SSH安全隧道,连接VNC:
ssh <youruser>@<serverip> -L 9901:localhost:5901
用VNC客户端连接localhost:9901(即SSH隧道的本地端口),画面似乎有些模糊,将客户端的质量设为“高”,一切都很美好。
到这里,胆小鬼已经花了整整一天的时间,就为了给好心人装配一台KVM虚拟机(而且是带远程访问的那种)。如果你要问他为什么要做这些,他或许会回答你,“只是因为在人群中多看了你一眼,我就知道你是好心人。对好心人,自然不能太差劲。”
胆小鬼鼓足勇气,想告诉好心人自己为他准备了一个非常好用的虚拟机。但看见好心人在专心忙活别的事情,胆小鬼走到他边上,欲言又止,转身又走了开去,好像什么也没发生过。
]]>The office server is just enough.
But about 45.67% of the reason is that there is a barbarian who is trying to snatch a machine, wanting to occupy another one; and about 62.72% of the reason is that there is a coward who pretends to be deaf and dumb, unwilling to sacrifice his own machine. So, in this world line where the probability is 108.39%, there is an individual who is willing to contribute, willing to be the benevolent one who takes a loss.
But it turns out to be still too little.
Stingy as the coward is, hearing that the good-hearted person had no machine to use still felt uncomfortable; but a machine cannot be conjured out of thin air.
“Since you can’t make something out of nothing, then you can only try ‘building a pagoda in a snail shell’ or ‘sailing a boat in the prime minister’s belly.’”
The coward decides to use KVM to split one machine into two, one for themselves and one for the kind-hearted individual. Even if the barbarian acts up again, they won’t be able to find a reason to snatch it again.
According to the timid one’s understanding, KVM only provides the virtualization layer; the actual emulation of hardware for the virtual machines to use is still done by QEMU, and libvirt commands are the true managers of the virtual machines. Therefore, QEMU and libvirt need to be installed first.
sudo apt install --no-install-recommends qemu-system libvirt-clients libvirt-daemon-system qemu-utils
--no-install-recommends: do not install recommended packages. This option can be used if graphical management tools are not needed.
As the timid one is cautious, they want their non-root user to be able to manage virtual machines as well. Therefore, run the following command:
sudo adduser <youruser> libvirt
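Note that group membership only applies to sessions started after the adduser call, so the current shell may still lack the permission. A small self-check sketch (assuming the Debian/Ubuntu group name libvirt):

```shell
# Group changes take effect on new login sessions; check whether the current
# session already sees the libvirt group, or whether a re-login is needed.
if id -nG | tr ' ' '\n' | grep -qx libvirt; then
  status="current session is in the libvirt group"
else
  status="libvirt group not active yet; re-login or run: newgrp libvirt"
fi
echo "$status"
```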
But that’s not enough. By default, the virtual machine management command virsh manages virtual machines under the current user. Some adjustments are needed in order to manage the virtual machines under the root name.
virsh --connect qemu:///system list --all
As a result, --connect qemu:///system has to be entered with every command. How inconvenient is that? Luckily, an environment variable can be exported so that virsh always manages the system’s virtual machines.
# Place the following environment variable declaration in ~/.bashrc or ~/.zshrc
export LIBVIRT_DEFAULT_URI='qemu:///system'
Reference: Debian Wiki/KVM.
The office lacks more than just space; there’s also a shortage of network cables. The timid one’s machine only has one network cable plugged in, so there must be a way for that single cable to provide internet access for both the host and guest machines. Moreover, the guest machine needs to be able to assign internal IP addresses, or else generous souls will have to handle port mapping, which is quite a hassle!
Network configurations in libvirt are roughly divided into three types:
As expected, the timid one chose the simplest Bridged Network configuration.
sudo ip link add br0 type bridge # Add a bridge named br0
sudo ip link set <device> up # Enable a network device, such as the enp0s2 interface
sudo ip link set <device> master br0 # Add the device to the bridge
sudo ip address add dev br0 192.168.1.142/24 # Set the host bridge's IP to 192.168.1.142
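Whether the enslavement actually took effect can be checked without extra tools; a small sketch, assuming the br0/enp0s2 example names used above, reading the sysfs master link:

```shell
# On Linux, /sys/class/net/<iface>/master is a symlink to the bridge a port
# is enslaved to; enp0s2/br0 are the example names from the text above.
iface="enp0s2"
if [ -e "/sys/class/net/$iface/master" ]; then
  master=$(basename "$(readlink "/sys/class/net/$iface/master")")
else
  master="(none)"
fi
echo "$iface -> bridge: $master"
```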
The bridge is now configured, but it will disappear after a restart. To make it persistent, the bridge-utils package is needed.
sudo apt install bridge-utils
Next, configure the network interface. For example, if the original interface used was enp0s2, replace the original iface enp0s2 inet dhcp line (in /etc/network/interfaces) with the following content:
# Set network interface enp0s2 to manual configuration to avoid conflicts with NetworkManager
iface enp0s2 inet manual
# Configure bridge br0
auto br0
iface br0 inet static
bridge_ports enp0s2
address 192.168.1.142
broadcast 192.168.1.255
netmask 255.255.255.0
gateway 192.168.1.1
Use systemd to restart the network service, and the network configuration will take effect.
sudo systemctl restart networking
Then, in order for libvirt’s virtual machines to use the br0 bridge, the bridge needs to be declared.
First, create a br0-bridge.xml file with the following content:
<network>
<name>br0-bridge</name>
<forward mode="bridge" />
<bridge name="br0" />
</network>
Then run the virsh command to import the declared configuration.
virsh net-define br0-bridge.xml
Use virsh net-list --all to view all existing networks.
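One detail not shown above: net-define only registers the network, which presumably still needs to be started (and, optionally, marked autostart) before virtual machines can attach to it. A guarded sketch, assuming the network name br0-bridge from the XML:

```shell
# Guarded so the sketch is a harmless no-op on machines without libvirt.
if command -v virsh >/dev/null 2>&1; then
  virsh net-start br0-bridge || true      # activate the defined network
  virsh net-autostart br0-bridge || true  # bring it up with libvirtd from now on
  result=$(virsh net-list --all 2>&1 || echo "virsh could not reach libvirtd")
else
  result="virsh not installed; skipping"
fi
echo "$result"
```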
Reference: Bridged Networking with libvirt.
Although the timid one is fearful, they are also known for being meticulous. The generous soul has always used the popular Ubuntu Desktop with the GNOME desktop environment. This time, even though the timid one had to make do with KVM without a graphical interface, shouldn’t the virtual machine still be configured to the generous soul’s liking, for an authentic experience?
However, as fate would have it, the timid one’s own system ran a non-graphical Debian, while installing Ubuntu Desktop requires a graphical environment. As a compromise, Ubuntu Server was installed instead, with the graphical interface to be solved separately.
First, download the system image, then install the libosinfo-bin package to help the virt-install command recognize the system version:
sudo apt install libosinfo-bin
Run the osinfo-query os command to view the system versions supported by virt-install. Since Ubuntu 22.04 is not included in the list shipped with the libosinfo-bin package, it is necessary to manually download and update the osinfo database from the libosinfo hosting site.
wget -O "/tmp/osinfo-db.tar.xz" "https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"
osinfo-db-import --user "/tmp/osinfo-db.tar.xz"
--user: database import location. Choosing --user stores the database in ~/.config/osinfo, --local stores it in /etc/osinfo, and --system stores it in /usr/share/osinfo.
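A quick way to confirm where the import landed is to look for the database directory itself; a sketch that checks the three locations described above:

```shell
# Check which of the three documented import locations holds an osinfo database.
found="none of the documented locations"
for dir in "$HOME/.config/osinfo" /etc/osinfo /usr/share/osinfo; do
  if [ -d "$dir" ]; then
    found="$dir"
    break
  fi
done
echo "osinfo database directory: $found"
```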
Edit the virtual machine installation command:
virt-install --virt-type kvm --name <domain-name> \
--location <path/to/ubuntu-22.04.iso>,kernel=casper/vmlinuz,initrd=casper/initrd \
--os-variant ubuntu22.04 \
--vcpus 10,maxvcpus=20 --cpu host \
--disk size=120 --memory 4096 \
--network network=br0-bridge \
--graphics none \
--console pty,target_type=serial \
--extra-args "console=ttyS0"
--name: the name of the virtual machine, also known as the domain name.
--location: system image location. It can be a network location, such as https://cn.archive.ubuntu.com/ubuntu/dists/jammy/main/installer-amd64/, or a local path; the --cdrom parameter can also specify the system image, but only supports local paths. In an environment without a graphical interface, the custom kernel parameters passed via --extra-args are required to enable a serial console for installation, and since --cdrom happens not to support custom kernel parameters, --location is the only choice. Because --location cannot automatically detect the kernel’s location inside the image, kernel=casper/vmlinuz,initrd=casper/initrd must be specified manually.
--os-variant: operating system type. Run the osinfo-query os command to view supported versions.
--vcpus: initial number of CPU threads. Each virtual thread in a KVM virtual machine is bound to a real thread, so setting more virtual CPUs than there are real threads is pointless. However, one real thread can be allocated to multiple virtual machines at the same time, with the scheduler working through the jobs assigned by each of them; this is where CPU overselling comes from.
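To make the oversell idea concrete, here is a back-of-the-envelope calculation with hypothetical numbers (a 20-thread host running three 10-vCPU guests, none of which appear in the text):

```shell
# Hypothetical example: a 20-thread host running three 10-vCPU guests.
real_threads=20
vcpus_total=$((10 + 10 + 10))               # vCPUs committed across guests
ratio=$((vcpus_total * 100 / real_threads)) # percent of real capacity promised
echo "vCPU oversubscription: ${ratio}%"
```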
--cpu: CPU configuration. The CPU model and features can be configured. When the model is set to host, the virtual machine gets all the features of the host CPU, but this may also prevent live migration.
--disk opt1=val1,opt2=val2,...: virtual machine storage device. The size can be set with the size option, and the path with the path option.
--memory: memory size.
--network: select the network. Here, choose the bridged network created earlier.
--graphics, --console, --extra-args: configure a serial console for the virtual machine, used to operate it directly from the host without a graphical interface.
By running the above command, you can enter the installation interface, select basic mode, and install the system via a text console.
See virt-install(1).
Originally, the generous soul could sit in front of their server, pour a cup of water, leisurely turn on the screen, and configure the environment or run code with lukewarm water in hand.
But since the timid one did not install a desktop environment, the generous soul could not open the virtual machine’s graphical interface. After some consideration, the timid one realized that installing remote desktop access would be more convenient than physically sitting in front of the server. Thus, they gritted their teeth and began configuring it for the generous soul.
First, VNC requires a desktop environment—don’t forget that Ubuntu Server was just installed! The timid one chose the GNOME desktop environment tailored to the generous soul’s preferences.
sudo apt install gnome-session gdm3 # Install the GNOME desktop environment and the gdm3 display manager
sudo apt install ubuntu-desktop # Install various packages necessary for the desktop environment
sudo systemctl set-default multi-user.target # Do not start the graphical environment by default
For the VNC server, TigerVNC was chosen.
sudo apt install tigervnc-standalone-server dbus-x11
Configure ~/.vnc/xstartup:
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
export DESKTOP_SESSION=/usr/share/xsessions/ubuntu.desktop
export XDG_CURRENT_DESKTOP=ubuntu:GNOME
export GNOME_SHELL_SESSION_MODE=ubuntu
export XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch --exit-with-session /usr/bin/gnome-session --systemd --session=ubuntu
--systemd: omit this parameter if the gnome-shell version is below 3.40.
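Two prerequisites that are easy to miss and are assumed rather than shown here: TigerVNC refuses to run a non-executable xstartup, and a VNC password must have been created with the interactive vncpasswd command. A small sketch for the first check:

```shell
# tigervnc will not run a non-executable xstartup; mark it executable if present.
# (A VNC password created with the interactive vncpasswd is the other prerequisite.)
if [ -f "$HOME/.vnc/xstartup" ]; then
  chmod +x "$HOME/.vnc/xstartup"
  xstartup_status="xstartup marked executable"
else
  xstartup_status="no ~/.vnc/xstartup found yet"
fi
echo "$xstartup_status"
```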
Configure ~/.vnc/config:
session=ubuntu
geometry=1920x1080
localhost
alwaysshared
alwaysshared: all clients connect to the same session.
Configure /etc/systemd/system/vncserver@.service:
[Unit]
Description=Start TigerVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=<youruser>
Group=<youruser>
WorkingDirectory=/home/<youruser>
PIDFile=/home/<youruser>/.vnc/%H:%i.pid
ExecStartPre=-/bin/sh -c "/usr/bin/vncserver -kill :%i > /dev/null 2>&1"
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1920x1080 -localhost :%i
ExecStop=/usr/bin/vncserver -kill :%i
Restart=on-success
RestartSec=10
[Install]
WantedBy=multi-user.target
-localhost: allow VNC access only from the local machine. For remote access, forward the port through an SSH secure tunnel.
Restart, RestartSec: when a client logs out, the server exits on its own; if the server should keep running, add the restart parameters above.
Lastly, start the VNC server via systemd:
sudo systemctl enable --now vncserver@1
@1: session number. Setting the session number to 1 uses port 5901; number 2 uses port 5902, and so on.
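The display-to-port mapping follows the usual VNC convention of a 5900 base port, so it can be computed directly; a tiny sketch:

```shell
# VNC convention: TCP port = 5900 + display number.
display=1                    # matches the systemd instance vncserver@1
vnc_port=$((5900 + display))
echo "display :$display listens on port $vnc_port"
```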
Finally, establish an SSH secure tunnel and connect to VNC:
ssh <youruser>@<serverip> -L 9901:localhost:5901
Connect to localhost:9901 (the local end of the SSH tunnel) with a VNC client; if the screen looks somewhat blurry, set the client’s quality to “high”, and everything will be splendid.
Here, the timid one has spent an entire day just to set up a KVM virtual machine for a kind-hearted person (and it’s the kind with remote access). If you were to ask him why he’s doing all this, he might reply, “Just because I took a second glance at you in the crowd, I knew you were a kind-hearted person. For kind-hearted people, I can’t be too incompetent.”
With courage, the timid one wanted to tell the kind-hearted person that he prepared a very user-friendly virtual machine for him. But seeing the kind-hearted person busy with other tasks, the timid one hesitated, walked up to him, struggled to speak, then turned and walked away as if nothing had happened.
]]>我一直以为这个周末会和夏天所有的周末一样,在炎热与焦躁不安中度过,却没想到又收到了姐妹的微信,问还有没有MC的整合包。
翻箱倒柜找到去年的目录,文件都还在,只是已经忘记了如何打开——在M1上原生运行Minecraft一直都是个头疼的事儿。
容我再研究研究。
“世事无常”,Hello Minecraft! Launcher (HMCL)已经支持调用自定义库文件,LWJGL 3.3.0往后已经出厂自带macOS-ARM64原生组件,ManyMC启动器实现库文件替换自动化。
一眼望去,似乎日新月异,欣欣向荣。只可惜金玉其外,败絮其中,世事如常尔尔。Minecraft至今未有动过为1.19以前的Java版本支持macOS-ARM的念头。想要原生运行Minecraft还得“吃自助”——自己解决LWJGL原生库。
又有一位哲人曾经说过,“靠自己,靠别人是没有幸福的”。多亏yaoxi-std,HMCL要想支持调用动态库文件已经有了现成的解决方案,但调用JAR库文件似乎还没有头绪,每次手动复制也不是办法,万一有哪次忘记关闭“检查游戏文件”,就是前功尽弃。
yusefnapora的MultiMC自动脚本给了我启发,HMCL的包装命令(Wrapper Command)功能同样可以实现每次启动时根据版本修改JVM参数,加载相应的原生库文件。
来活了。
为什么M1 MultiMC Hack历史已经如此悠久,HMCL就不能有M1 HMCL Hack?是牌面不足吗?今天我就要打破这一魔咒,M1 HMCL Hack今日Debut。
好像押韵了。
包装命令可以自动识别Minecraft版本,将原有JVM启动参数中的库文件路径替换为正确的原生库文件路径。使用方便,如假包换。
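包装命令的思路可以用一个假想的极简shell脚本来示意(并非真实的index.rb,natives路径替换规则也仅为演示):

```shell
#!/bin/sh
# 假想示例:HMCL 会把完整的 java 启动命令逐个参数传给包装命令,
# 真实的 index.rb 会按 Minecraft 版本改写 natives 库路径后再执行;
# 此处仅演示“读取-改写”的结构,替换规则是虚构的。
cmd="$*"
cmd=$(printf '%s' "$cmd" | sed 's|natives-macos|natives-macos-arm64|g')
printf 'would run: %s\n' "$cmd"
```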
友情提示:虽然咱们名字叫M1 HMCL Hack,从理论上来说M2应该还是兼容的。
首先至Azul官方网站下载Zulu JDK,下载时可以选择.dmg安装包直接安装。架构需要选择ARM 64-bit;另外,由于HMCL需要调用OpenJFX,包类型需要选择JDK-FX。Java版本则需要依据Minecraft版本来选择。经过我的实际测试,Minecraft与Java的兼容性大致如下。
| Minecraft | Java | LWJGL |
| --- | --- | --- |
| 1.19 | >= 17 | 3.3.1 |
| 1.18 | >= 17 | 3.3.1 |
| 1.17 | >= 17 | 3.2.3 |
| 1.16 | >= 8 | 3.3.1 |
| 1.12 | >= 8, <= 11 | 2.9.4 |
| 1.10 | 8 | 2.9.4 |
| 1.7 | 8 | 2.9.4 |
JDK的各个版本间可以共存,我全都要党胜利依旧。
接下来,下载HMCL,落笔时,最新版为3.5.3.211。
克隆M1 HMCL Hack的仓库到本地。
git clone https://github.com/DotIN13/m1-hmcl-hack.git
打开HMCL,下载或者导入一个Minecraft实例。
然后进入实例设置,勾选“启用游戏特定设置”(Enable per-instance settings)。
再根据上文中的表格选择合适的“Java路径”(Java Path),注意选择第一步中安装的ARM架构Java版本。
向下滚动页面,找到高级设置中的“包装命令”(Wrapper Command),填入/usr/bin/ruby /path/to/index.rb,此处/path/to/index.rb应为index.rb文件在本地的路径。运行游戏时,HMCL会将当前目录切换到.minecraft目录下,因此,如果填写相对路径,应当以.minecraft作为起点。
最后,由于HMCL默认禁止在ARM架构的macOS中使用ARM架构Java启动Minecraft,因此需要在页面最底部勾选“不检查JVM与游戏的兼容性”。
至此,点击开始游戏,Minecraft就可以以Nosetta(No-rosetta)状态运行了。对于任何新安装的Minecraft版本,只需要重新操作第三步,即可继续纵享丝滑。
你可能会问我为什么要做HMCL Hack。我可能会回答你,我想让更多和我一样拥有Apple Silicon,喜欢HMCL的玩家玩上更加流畅的Minecraft。但这一切都有必要吗?或者说,在实然上确实有必要,那么在应然的角度上来看呢?
Microsoft应该带头解决Minecraft的兼容问题。GLFW应该在小版本更新时考虑到向后兼容性。LWJGL应该对旧版本进行重发布来兼容ARM架构而不是通过大版本更新来解决,以免导致使用旧版库的所有应用都无法在macOS ARM-64bit上运行。
对于Minecraft这个特别的游戏来说,每个游戏版本无论新旧都有庞大的玩家群体,向后兼容性至关重要。但即便如此,整个上下游依旧各自为政,到今天为止也只解决了最新版本的兼容性问题,对旧版本选择了全然无视。
实然与应然有如隔海,理想的家园不知所在。开源的本质是爱好者的分享,这也就好比计算机软件的拥有者与掌握者与使用者签订了契约,将使用和修正的权力赋予用户。但与一般意义上的社会契约不同,用户缺少了推翻“统治者”的权力。虽然开源隐喻着开放,但代码的维护者依旧保有着全部的权力,即便用户可以修正他们所使用的计算机软件,他们也不能对其他用户所使用的软件造成实质的影响,缺乏公信力、缺乏共享渠道,导致了这些用户进行的修正最终只能停留在自娱自乐。这就好比用户只是一个制造假币的小丑,而币制的制定权依旧掌握在某些“爱好者”的手里。
开源不是一个完全去中心化的过程,它只是将中心进行了转移,从邪恶的大厂手中,转移到了另一部分人手中。如果这些开发者与用户的想法背道而驰,那么失去替代选项的用户也只得言听计从。
你一言我一语自然是不能成事,但开源是不是也只是“少数人的暴政”披上了羊皮?
]]>