<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><generator uri="https://jekyllrb.com/" version="4.3.3">Jekyll</generator><link href="https://www.wannaexpresso.com/feed.xml" rel="self" type="application/atom+xml"/><link href="https://www.wannaexpresso.com/" rel="alternate" type="text/html"/><updated>2024-04-23T12:07:21+08:00</updated><id>https://www.wannaexpresso.com/feed.xml</id><title type="html">LACKAWANNA EXPRESSO</title><subtitle>Act local, think global with DotIN13!</subtitle><author><name>DotIN13</name></author><entry><title type="html">这就是上海</title><link href="https://www.wannaexpresso.com/2023/09/02/cest-la-shanghai/" rel="alternate" type="text/html" title="这就是上海"/><published>2023-09-02T00:00:00+08:00</published><updated>2023-09-02T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2023/09/02/cest-la-shanghai</id><content type="html" xml:base="https://www.wannaexpresso.com/2023/09/02/cest-la-shanghai/"><![CDATA[<p>周三，我像往常一样坐公交上班。车里略显拥挤。</p><p>我倒没觉得多怪——边上几个“上班族”、“学生党”大多熟悉面孔，其余的多半是今天特地出门赶菜场、赶超市、赶医院的阿姨爷叔。</p><p>虽说已经过了立秋，上海这天气依旧炎热，早晨的太阳仅用三成功力，已将马路烤得热气蒸腾，把尽头的筒子楼熏得变了形。公交车里则是上海特有的湿热，外头的热浪从窗口钻进来，混着车里的潮气与汗味儿，让人有些喘不过气来。</p><h2 id="阿姨">阿姨</h2><p>“司机空调开了伐啦？”几乎是意料之中，身边的阿姨已经耐不住性子，声音穿过十数人直逼驾驶室。</p><p>“啊？”</p><p>“空调！”</p><p>“开了呀！呐窗子开了组撒啦，关特依呀，开了窗空调难能打得起来啦！”</p><p>“窗外头比里厢风凉，搞撒么事啊！”另几个阿姨爷叔也开始议论纷纷，试图给司机施加压力。</p><p>“个么呐自家看呀，空调肯定开了呀，呐开窗设宜么就开窗好伐啦？”</p><p>阿姨显然对司机的解释并不满意，但无奈中间隔了一车厢的人，没法到驾驶室当面对质，嘟囔了几句就悻悻地掏出手机刷起了短视频。</p><p>车程几乎过半，路过好几个老小区，车上的老阿姨老师傅愈渐多了起来，偶尔见着有腿脚不便的，心里总想着起来让个座。不过我坐在靠里的位置。眼朝身边的阿姨一瞥，见她依旧紧锁眉头，便也不好叫她起来，只好放下这个念头，把头埋进手机里。好在他们大多过了几站便下车了，站这么一会估计尚能够承受。</p><h2 
id="老者">老者</h2><p>我还沉浸在自我开导当中，突然听得车前头传来一声炸响，“撒宁帮这个老师傅让个座！”猛地一抬头，看见驾驶室门大开着，司机师傅一手叉腰一手扶门，正在朝我这边喊过来。</p><p>我过了约莫半分钟才回过神来，靠近后门的地方站着一位身形佝偻的老者，六、七十岁的样子，褐色的灯芯绒上衣翻毛得厉害，黑色的粗麻长裤好几处已经褪成了白色，标签都未及撕去的金丝眼镜背后是一双空洞的眼睛。</p><p>没等我起身，另一个年轻人已经为他让了座。司机师傅多少带着些得意，“霞霞个位年轻人！”</p><p>但老者似乎并不领情，仿佛什么都没听到似的依旧紧紧攥着栏杆，双眼无神而笃定地望着窗外。</p><p>“跟你说话呢”，我身边的阿姨用上海口音的普通话尖声说道，几乎是要把刚才因为空调憋的气一股脑撒在老者身上，“司机叫这个年轻人给你让座！”</p><p>老者抬抬眉毛，似乎反应了过来；我也明白了——老者听不懂司机说的上海话。</p><p>他挪着步子走向年轻人刚刚让出的座位，嘴里嘟哝了一句：“原来是让我坐，我还以为是怀疑我呢。”声音虽小，在公交车这铁皮盒子里听起来却格外的响亮。起初大家只是窸窸窣窣地议论，到这会仿佛是能量聚集爆发了似的，统统大笑了起来。</p><p>身边的上海阿姨穷追不舍，“人家司机是对你好，叫这个年轻人给你让座，你倒还以为是怀疑你！”</p><p>“谢谢啊”，不知是不好意思，还是只是顺水推舟，老者从他仅剩两颗大牙的嘴里好不容易憋出这么三个字来。</p><p>阿姨似乎察觉到了老者的不以为然，大声回应道，“对呀，这就是上海呀！”</p><p>司机师傅又经过阿姨的翻译问了问老者在哪里下车，嘱咐他到站了慢点走，这才满意地坐回他的宝座，继续朝前开去。</p><p>阿姨这句话真是激起了我全身的鸡皮疙瘩，这二十余年里，从未有人这么坚定地说出过这样的话，更不用说让我真正感觉到上海是这样一个包容、博爱、海纳百川的城市。是啊，阿姨说得对，这就是上海嘛！</p><h2 id="女士">女士</h2><p>我正回味着上海的种种美好，司机那头又传来喧闹声——原来是有位年轻女士没决定好要不要上车，在门口看地图，耽误了司机关门开车。</p><p>那位女士连连道歉，原本善解人意的司机师傅这回竟丝毫不买账，开足火力厉声批评。“你站在这我怎么开车？”“你让全车的人等你？”“你就不能提前看好地图？”女士根本不敢吱声，慌里慌张地刷了卡就到后面来站着了。不料司机师傅依旧穷追不舍，质问声混着热浪扑过来。</p><p>我身边的阿姨又坐不住了，仿佛她才是这班公交的乘务员，“司机侬开呀，人家对伐起啊岗好了呀！一车子的人都交给你了，专心开车！”阿姨话音落下，车上众人也拧上了发条一般，东一句西一句地小声附和。</p><p>在大家的努力下，车总算开了。</p><h2 id="到站">到站？</h2><p>车离终点越来越近，我却有些恍惚，总觉得我才刚刚上车。</p><p>临到站，我背上包预备下车。意外的是，身边的阿姨早已察觉，客气地收起手机，起身为我让道。我几乎有些惶恐，站起来就闷头往后门走。</p><p>车门外是杨浦随处可见的梧桐，灼热的空气模糊着视线，像每个上海的夏天一样。我竟觉得有些陌生。</p><p>到站了，我下了车，回过身看着车缓缓驶离，却又觉得自己好像从来没下过车。</p><p>也许这才是上海，有包容、温情，也有小肚鸡肠、表面和气。车上车下，没有哪一件事是全部的上海，却每一件事都是上海。</p>]]></content><author><name>DotIN13</name></author><category term="Life"/><category term="Short Story"/><summary type="html"><![CDATA[周三，我像往常一样坐公交上班。车里略显拥挤。]]></summary></entry><entry><title type="html">C’est la Shanghai</title><link href="https://www.wannaexpresso.com/en-us/2023/09/02/cest-la-shanghai/" rel="alternate" type="text/html" title="C’est la Shanghai"/><published>2023-09-02T00:00:00+08:00</published><updated>2023-09-02T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/en-us/2023/09/02/cest-la-shanghai</id><content 
type="html" xml:base="https://www.wannaexpresso.com/en-us/2023/09/02/cest-la-shanghai/"><![CDATA[<p>On Wednesday, I took the bus to work as usual. The bus was a bit crowded.</p><p>I didn’t think much of it - most of the people around me were familiar faces, office workers and students mostly, the rest aunties and uncles making a special trip to the market, the supermarket, or the hospital.</p><p>The Beginning of Autumn had passed, yet Shanghai’s weather remained hot. The morning sun, using only a fraction of its strength, was already making the road sizzle and warping the tenement blocks at the end of it in the heat haze. Inside the bus was the humid heat unique to Shanghai: hot air from outside seeping in through the windows, mixing with the dampness and sweat inside, making it hard to catch one’s breath.</p><h2 id="aunties">Aunties</h2><p>“Driver, is the air conditioning even on?” Almost as expected, the auntie beside me couldn’t hold back, her voice cutting past a dozen passengers straight to the driver’s cab.</p><p>“Huh?”</p><p>“The air conditioning!”</p><p>“It’s on! What are you opening the windows for? Close them - the air conditioning can’t keep up with the windows open!”</p><p>“It’s cooler outside than in here - what’s going on?” Other aunties and uncles started to chime in, trying to put pressure on the driver.</p><p>“Then see for yourself - the air conditioning is definitely on. If you like the windows open, keep them open, all right?”</p><p>The auntie was clearly not satisfied with the driver’s explanation, but with a busload of people in between, she couldn’t confront him at the cab face to face. She muttered a few words and took out her phone to watch short videos.</p><p>As the journey continued and we passed several older neighborhoods, more elderly people got on the bus. Occasionally, when I saw someone less mobile, I would think of giving up my seat. However, I was sitting in the inner seat. 
Glancing at the auntie beside me, who still had a furrowed brow, I didn’t have the heart to ask her to stand up. I put the thought aside and buried my head in my phone. Luckily, most of them got off after a few stops, so standing for a short while was probably bearable for them.</p><h2 id="elderly-man">Elderly Man</h2><p>Lost in my thoughts, I suddenly heard a shout from the front of the bus: “Someone give this old gentleman a seat!” I jerked my head up to see the cab door wide open, the driver with one hand on his hip and the other on the door, shouting in my direction.</p><p>It took me about half a minute to come to my senses. Standing near the back door was a stooped elderly man, in his sixties or seventies, wearing a badly worn brown corduroy jacket and coarse black linen pants faded to white in several places, a pair of hollow eyes behind wire-framed glasses that still had the label on them.</p><p>Before I could get up, another young man had already offered him his seat. The driver seemed somewhat pleased, “Thank you, young man!”</p><p>But the elderly man seemed indifferent, as if he had not heard anything, still holding onto the railing tightly, his eyes empty yet unwavering as he gazed out the window.</p><p>“He’s talking to you,” the auntie beside me said shrilly in Shanghai-accented Mandarin, all but venting the frustration from the air conditioning dispute onto the elderly man, “the driver asked this young man to give you his seat!”</p><p>The elderly man raised his eyebrows, seeming to catch on; I understood too - he couldn’t understand the Shanghainese the driver spoke.</p><p>He shuffled towards the seat the young man had just vacated and muttered, “So he wanted me to sit, I thought he was suspecting me of something.” His voice, although quiet, sounded exceptionally loud in the metal box of the bus. 
Initially, people had only been whispering among themselves, but now, as if the pent-up energy had finally erupted, everyone burst into laughter.</p><p>The Shanghai auntie beside me persisted, “The driver was being nice to you, asking this young man to give up his seat for you, and you thought he was suspecting you!”</p><p>“Thank you,” whether out of embarrassment or just going with the flow, the elderly man finally squeezed the words out of a mouth with only two teeth remaining.</p><p>The auntie seemed to notice the elderly man’s indifference and loudly responded, “Yes, this is Shanghai!”</p><p>The driver, through the auntie’s translation, asked the elderly man where he needed to get off and told him to walk slowly at his stop. Satisfied, the driver returned to his throne and drove on.</p><p>The auntie’s words gave me goosebumps all over. In over twenty years, no one had said such words to me with such conviction, let alone made me truly feel that Shanghai is such a tolerant, loving, and all-embracing city. Yes, the auntie was right - this is Shanghai!</p><h2 id="lady">Lady</h2><p>As I was savoring the beauty of Shanghai, a commotion came from the driver’s end - a young lady, unsure whether to board the bus, was standing at the door studying the map, keeping the driver from closing the door and moving off.</p><p>The lady apologized over and over, but the usually understanding driver showed no mercy this time, criticizing her sternly at full blast. “How can I drive with you standing there?” “You want the whole bus to wait for you?” “Can’t you check the map beforehand?” The lady didn’t dare utter a word, swiped her card in a fluster, and went to stand at the back. Yet the driver kept pressing, his questions rolling back over her with the heat.</p><p>The auntie beside me couldn’t sit still, as if she were the bus attendant, “Driver, just drive - she’s already apologized! 
You’re responsible for everyone on this bus, focus on driving!” As the auntie spoke, everyone else on the bus, as if wound up like clockwork, began to murmur in agreement in hushed tones.</p><p>With everyone’s effort, the bus finally started moving.</p><h2 id="are-we-there-yet">Are We There Yet?</h2><p>As the bus inched closer to the final stop, I felt somewhat dazed, as if I had only just gotten on.</p><p>Approaching my stop, I shouldered my bag to get off. Surprisingly, the auntie next to me had already noticed; she politely put away her phone and stood up to make way for me. Almost flustered, I stood up and headed straight for the back door.</p><p>Outside the door were the ubiquitous plane trees of Yangpu, the scorching air blurring my vision, like every summer in Shanghai. Somehow it felt unfamiliar.</p><p>I got off and turned to watch the bus slowly pull away, yet I felt as if I had never gotten off at all.</p><p>Perhaps this is Shanghai - tolerant and warm, but also petty and superficially polite. On the bus and off it, no single thing is the whole of Shanghai, yet every one of them is Shanghai.</p>]]></content><author><name>DotIN13</name></author><category term="en-us"/><category term="Life"/><category term="Short Story"/><summary type="html"><![CDATA[On Wednesday, I took the bus to work as usual. The bus was a bit crowded.]]></summary></entry><entry><title type="html">这篇文章来得不太晚：Caddy反向代理ChatGPT</title><link href="https://www.wannaexpresso.com/2023/05/21/reverse-proxy-chatgpt-with-caddy/" rel="alternate" type="text/html" title="这篇文章来得不太晚：Caddy反向代理ChatGPT"/><published>2023-05-21T00:00:00+08:00</published><updated>2023-05-21T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2023/05/21/reverse-proxy-chatgpt-with-caddy</id><content type="html" xml:base="https://www.wannaexpresso.com/2023/05/21/reverse-proxy-chatgpt-with-caddy/"><![CDATA[<p>有的时候，大雨来了忘记带伞，于是之后每天都带着伞出门，直到下次大雨的前一天。</p><p>加州淘金错过了，比特币错过了，大语言模型仿佛又要错过。不过也无妨，错过的只是这个平行宇宙。</p><h2 id="反向代理chatgpt晚哉">反向代理ChatGPT，晚哉？</h2><p>不晚呼——心里默念，总有人会用得上罢。</p><h2 id="使用caddy代理chatgpt">使用Caddy代理ChatGPT</h2><p>网页端由于Cloudflare管得比较严，估摸着反向代理有封号风险，于是转而反向代理OpenAI API。</p><p>代理起来相当容易，首先<a href="https://www.wannaexpresso.com/2020/04/21/aria-pi/#%E4%BE%9D%E7%85%A7%E5%AE%98%E6%96%B9%E6%8C%87%E5%8D%97%E5%AE%89%E8%A3%85caddy" target="_blank" rel="nofollow noopener noreferrer">安装Caddy 2</a>。</p><p>编写<code class="language-plaintext highlighter-rouge">/etc/Caddyfile</code>：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;host&gt;:&lt;port&gt; <span class="o">{</span>
  reverse_proxy https://api.openai.com <span class="o">{</span>
    header_up Host api.openai.com
  <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div><p>其中，<code class="language-plaintext highlighter-rouge">&lt;host&gt;</code>应当替换为代理服务器的IP或者域名，<code class="language-plaintext highlighter-rouge">&lt;port&gt;</code>应当替换为监听的端口。</p><p>值得注意的是，OpenAI API的Cloudflare防御机制会检测请求中的Host值，以判断请求是否确实发向OpenAI。如果不为<code class="language-plaintext highlighter-rouge">api.openai.com</code>，将返回403 Forbidden错误。</p><p>因此必须设置<code class="language-plaintext highlighter-rouge">header_up Host api.openai.com</code>，将请求头中的Host修改为对应值。</p><p>运行<code class="language-plaintext highlighter-rouge">sudo systemctl start caddy</code>开启Caddy服务器，可以使用<code class="language-plaintext highlighter-rouge">curl</code>测试代理服务器是否工作正常：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>curl https://&lt;host&gt;:&lt;port&gt;/v1/models
<span class="o">{</span>
  <span class="s2">"error"</span>: <span class="o">{</span>
    <span class="s2">"message"</span>: <span class="s2">"You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys."</span>,
    <span class="s2">"type"</span>: <span class="s2">"invalid_request_error"</span>,
    <span class="s2">"param"</span>: null,
    <span class="s2">"code"</span>: null
  <span class="o">}</span>
<span class="o">}</span>
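<span class="c"># 附注（假设性示例）：带上API Key重试，其中 YOUR_KEY 仅为占位符；若代理正常，应能返回模型列表</span>
<span class="nv">$ </span>curl <span class="nt">-H</span> <span class="s2">"Authorization: Bearer YOUR_KEY"</span> https://&lt;host&gt;:&lt;port&gt;/v1/models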
</code></pre></div></div><p>返回值提示需要提供API Key，表示已经配置成功。</p><p>如果说<a href="https://www.wannaexpresso.com/2020/04/26/wod-reverse-proxy/" target="_blank" rel="nofollow noopener noreferrer">反向代理WOD</a>的难度是13，反向代理ChatGPT的难度只能勉强打4分！</p><h2 id="多嘴">多嘴</h2><p>如果喜欢使用JSON配置Caddy，也可以参考以下配置：</p><div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"admin"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"disabled"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"logging"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"logs"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
      </span><span class="nl">"log0"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"writer"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"output"</span><span class="p">:</span><span class="w"> </span><span class="s2">"stdout"</span><span class="w">
        </span><span class="p">},</span><span class="w">
        </span><span class="nl">"encoder"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"format"</span><span class="p">:</span><span class="w"> </span><span class="s2">"console"</span><span class="w">
        </span><span class="p">},</span><span class="w">
        </span><span class="nl">"level"</span><span class="p">:</span><span class="w"> </span><span class="s2">"WARN"</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"apps"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"http"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
      </span><span class="nl">"servers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"srv0"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"listen"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">":&lt;port&gt;"</span><span class="p">],</span><span class="w">
          </span><span class="nl">"routes"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
            </span><span class="p">{</span><span class="w">
              </span><span class="nl">"match"</span><span class="p">:</span><span class="w"> </span><span class="p">[{</span><span class="w"> </span><span class="nl">"host"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"&lt;host&gt;"</span><span class="p">]</span><span class="w"> </span><span class="p">}],</span><span class="w">
              </span><span class="nl">"handle"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                </span><span class="p">{</span><span class="w">
                  </span><span class="nl">"handler"</span><span class="p">:</span><span class="w"> </span><span class="s2">"subroute"</span><span class="p">,</span><span class="w">
                  </span><span class="nl">"routes"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                    </span><span class="p">{</span><span class="w">
                      </span><span class="nl">"handle"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                        </span><span class="p">{</span><span class="w">
                          </span><span class="nl">"handler"</span><span class="p">:</span><span class="w"> </span><span class="s2">"reverse_proxy"</span><span class="p">,</span><span class="w">
                          </span><span class="nl">"headers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                            </span><span class="nl">"request"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                              </span><span class="nl">"set"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                                </span><span class="nl">"Host"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"api.openai.com"</span><span class="p">]</span><span class="w">
                              </span><span class="p">}</span><span class="w">
                            </span><span class="p">}</span><span class="w">
                          </span><span class="p">},</span><span class="w">
                          </span><span class="nl">"transport"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                            </span><span class="nl">"protocol"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http"</span><span class="p">,</span><span class="w">
                            </span><span class="nl">"tls"</span><span class="p">:</span><span class="w"> </span><span class="p">{}</span><span class="w">
                          </span><span class="p">},</span><span class="w">
                          </span><span class="nl">"upstreams"</span><span class="p">:</span><span class="w"> </span><span class="p">[{</span><span class="w"> </span><span class="nl">"dial"</span><span class="p">:</span><span class="w"> </span><span class="s2">"api.openai.com:443"</span><span class="w"> </span><span class="p">}]</span><span class="w">
                        </span><span class="p">}</span><span class="w">
                      </span><span class="p">]</span><span class="w">
                    </span><span class="p">}</span><span class="w">
                  </span><span class="p">]</span><span class="w">
                </span><span class="p">}</span><span class="w">
              </span><span class="p">],</span><span class="w">
              </span><span class="nl">"terminal"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
            </span><span class="p">}</span><span class="w">
          </span><span class="p">]</span><span class="w">
        </span><span class="p">}</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div><h2 id="再多嘴几句">再多嘴几句</h2><p>回到开头的问题，究竟为什么（请原谅，我总是一个喜欢问为什么的人）会错过？</p><p>似乎有两个主要的方面，一个是开始的动力，一个是持续的毅力。就好比前段时间大火的核聚变试验，点火成功，能量输出大于能量输入，这注定是一次不可磨灭的成功——一个好的开始是成功的一半。但同时，聚变最为困难的就是持续控制等离子体，保持高温高压环境，确保聚变稳定发生（请原谅，我是一个不太会举例子的人）。</p><p>人做一件事也没有太大的差别，要做好一件事，首先需要一个自己确信的由头，再加上一些破釜沉舟的劲头，这已经是极难的了。例如要用上ChatGPT，要买上手机号，与俄罗斯电话号商斗智斗勇，又要准备代理链接，和国内网络代理商斗智斗勇，又要逃避OpenAI的监管，和对岸资本主义斗智斗勇，最后到头来打开了界面，还要头疼该问些什么、怎么问。仿佛是回到新石器时代，重新学怎么使用石锤、石锄，怎样炼铁……</p><p>说复杂，倒也不复杂，费尽心思用上了ChatGPT，总能做些什么吧？那倒也说不准。打开问答界面：没有提问的欲望。打开工作文档：找不到提问点。打开VSCode：不知道API能做什么。</p><p>空，空空的。</p><p>虽说开始做一件事不容易，但它也就是那么一瞬的事。而坚持，是一个周、一个月，是十年的冷板凳。况且，坚持不是一句口号，坚持的途中是不间断的复杂思维与发明创造——没有惊喜的日子谁都过不下去，再要是真的没有惊喜，那就只能自己动手创造。</p><p>翻来覆去说了那么多，也就无非那么两句话：万事开头难，修行靠自身。不少成功者恐怕都是这样走来的。</p>]]></content><author><name>DotIN13</name></author><category term="ChatGPT"/><category term="OpenAI"/><category term="Caddy"/><summary type="html"><![CDATA[有的时候，大雨来了忘记带伞，于是之后每天都带着伞出门，直到下次大雨的前一天。]]></summary></entry><entry><title type="html">This Article Is Not Too Late: Caddy Reverse Proxy for ChatGPT</title><link href="https://www.wannaexpresso.com/en-us/2023/05/21/reverse-proxy-chatgpt-with-caddy/" rel="alternate" type="text/html" title="This Article Is Not Too Late: Caddy Reverse Proxy for ChatGPT"/><published>2023-05-21T00:00:00+08:00</published><updated>2023-05-21T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/en-us/2023/05/21/reverse-proxy-chatgpt-with-caddy</id><content type="html" xml:base="https://www.wannaexpresso.com/en-us/2023/05/21/reverse-proxy-chatgpt-with-caddy/"><![CDATA[<p>Sometimes, forgetting to bring an umbrella when it rains heavily leads to carrying one every day until the day before the next heavy rain.</p><p>Missed out on the California Gold Rush, missed out on Bitcoin, and it seems like we are about to miss out on the large language models again. 
But it’s okay, what we miss is just this parallel universe.</p><h2 id="reverse-proxy-for-chatgpt-is-it-too-late">Reverse Proxy for ChatGPT, Is It Too Late?</h2><p>Not too late, I silently told myself - surely someone will find it useful.</p><h2 id="proxying-chatgpt-with-caddy">Proxying ChatGPT with Caddy</h2><p>Since Cloudflare keeps a tight watch on the web interface, reverse proxying it likely risks an account ban, so I turned to reverse proxying the OpenAI API instead.</p><p>It is quite easy to set up the proxy. First, <a href="https://www.wannaexpresso.com/2020/04/21/aria-pi/#%E4%BE%9D%E7%85%A7%E5%AE%98%E6%96%B9%E6%8C%87%E5%8D%97%E5%AE%89%E8%A3%85caddy" target="_blank" rel="nofollow noopener noreferrer">install Caddy 2</a>.</p><p>Write <code class="language-plaintext highlighter-rouge">/etc/Caddyfile</code>:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;host&gt;:&lt;port&gt; <span class="o">{</span>
  reverse_proxy https://api.openai.com <span class="o">{</span>
    header_up Host api.openai.com
  <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div><p>Where <code class="language-plaintext highlighter-rouge">&lt;host&gt;</code> should be replaced with the IP or domain of the proxy server, and <code class="language-plaintext highlighter-rouge">&lt;port&gt;</code> should be replaced with the listening port.</p><p>It is worth noting that OpenAI API’s Cloudflare defense mechanism checks the Host value in the request to determine if the request is indeed sent to OpenAI. If it is not <code class="language-plaintext highlighter-rouge">api.openai.com</code>, a 403 Forbidden error will be returned.</p><p>Therefore, you must set <code class="language-plaintext highlighter-rouge">header_up Host api.openai.com</code> to modify the Host in the request header to the corresponding value.</p><p>Run <code class="language-plaintext highlighter-rouge">sudo systemctl start caddy</code> to start the Caddy server. You can test if the proxy server is working properly using <code class="language-plaintext highlighter-rouge">curl</code>:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>curl https://&lt;host&gt;:&lt;port&gt;/v1/models
<span class="o">{</span>
  <span class="s2">"error"</span>: <span class="o">{</span>
    <span class="s2">"message"</span>: <span class="s2">"You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys."</span>,
    <span class="s2">"type"</span>: <span class="s2">"invalid_request_error"</span>,
    <span class="s2">"param"</span>: null,
    <span class="s2">"code"</span>: null
  <span class="o">}</span>
<span class="o">}</span>
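<span class="c"># With a valid key (assuming it is exported as OPENAI_API_KEY), the same</span>
<span class="c"># request should return the model list instead of the error above:</span>
<span class="c">#   curl https://&lt;host&gt;:&lt;port&gt;/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"</span>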
</code></pre></div></div><p>The returned message says an API key is required, which indicates the proxy has been configured successfully.</p><p>If reverse proxying WOD had a difficulty level of 13, reverse proxying ChatGPT would only score a mere 4!</p><h2 id="just-a-thought">Just a Thought</h2><p>If you prefer configuring Caddy with JSON, you can refer to the following configuration instead:</p><div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"admin"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"disabled"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"logging"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"logs"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
      </span><span class="nl">"log0"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"writer"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"output"</span><span class="p">:</span><span class="w"> </span><span class="s2">"stdout"</span><span class="w">
        </span><span class="p">},</span><span class="w">
        </span><span class="nl">"encoder"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"format"</span><span class="p">:</span><span class="w"> </span><span class="s2">"console"</span><span class="w">
        </span><span class="p">},</span><span class="w">
        </span><span class="nl">"level"</span><span class="p">:</span><span class="w"> </span><span class="s2">"WARN"</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"apps"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"http"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
      </span><span class="nl">"servers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"srv0"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"listen"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">":&lt;port&gt;"</span><span class="p">],</span><span class="w">
          </span><span class="nl">"routes"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
            </span><span class="p">{</span><span class="w">
              </span><span class="nl">"match"</span><span class="p">:</span><span class="w"> </span><span class="p">[{</span><span class="w"> </span><span class="nl">"host"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"&lt;host&gt;"</span><span class="p">]</span><span class="w"> </span><span class="p">}],</span><span class="w">
              </span><span class="nl">"handle"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                </span><span class="p">{</span><span class="w">
                  </span><span class="nl">"handler"</span><span class="p">:</span><span class="w"> </span><span class="s2">"subroute"</span><span class="p">,</span><span class="w">
                  </span><span class="nl">"routes"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                    </span><span class="p">{</span><span class="w">
                      </span><span class="nl">"handle"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
                        </span><span class="p">{</span><span class="w">
                          </span><span class="nl">"handler"</span><span class="p">:</span><span class="w"> </span><span class="s2">"reverse_proxy"</span><span class="p">,</span><span class="w">
                          </span><span class="nl">"headers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                            </span><span class="nl">"request"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                              </span><span class="nl">"set"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                                </span><span class="nl">"Host"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"api.openai.com"</span><span class="p">]</span><span class="w">
                              </span><span class="p">}</span><span class="w">
                            </span><span class="p">}</span><span class="w">
                          </span><span class="p">},</span><span class="w">
                          </span><span class="nl">"transport"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
                            </span><span class="nl">"protocol"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http"</span><span class="p">,</span><span class="w">
                            </span><span class="nl">"tls"</span><span class="p">:</span><span class="w"> </span><span class="p">{}</span><span class="w">
                          </span><span class="p">},</span><span class="w">
                          </span><span class="nl">"upstreams"</span><span class="p">:</span><span class="w"> </span><span class="p">[{</span><span class="w"> </span><span class="nl">"dial"</span><span class="p">:</span><span class="w"> </span><span class="s2">"api.openai.com:443"</span><span class="w"> </span><span class="p">}]</span><span class="w">
                        </span><span class="p">}</span><span class="w">
                      </span><span class="p">]</span><span class="w">
                    </span><span class="p">}</span><span class="w">
                  </span><span class="p">]</span><span class="w">
                </span><span class="p">}</span><span class="w">
              </span><span class="p">],</span><span class="w">
              </span><span class="nl">"terminal"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
            </span><span class="p">}</span><span class="w">
          </span><span class="p">]</span><span class="w">
        </span><span class="p">}</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div><h2 id="one-more-thing">One More Thing</h2><p>Going back to the initial question: why did we miss out (please forgive me, I am someone who always likes to ask why)?</p><p>It seems there are two main aspects: the initial drive and the perseverance. Just like the recent nuclear fusion experiment that achieved ignition and released more energy than was put in, it is destined to be an indelible success—a good start is half the battle. However, the most difficult part of fusion is to sustainably control the plasma, maintain a high-temperature and high-pressure environment, and ensure stable fusion occurs (please forgive me, I am not good at giving examples).</p><p>Merely doing something is nothing special. To do something well, you first need conviction from within yourself, coupled with determined effort, and that alone is already very challenging. For example, to use ChatGPT, you need to purchase a mobile number, outwit the Russian telephone number provider, prepare proxy links, contend with domestic network proxy providers, evade OpenAI’s regulation, and compete intellectually with capitalist enemies across the sea. Finally, when you open the interface, you still need to figure out what to ask and how to ask it. It’s like going back to the Neolithic era, relearning how to use a stone hammer, stone hoe, how to smelt iron…</p><p>It may sound complex, but it’s not that complicated. After putting in the effort to use ChatGPT, you can always accomplish something, right? Well, that’s uncertain. Open the Q&amp;A interface: no desire to ask questions. Open the work document: can’t find a starting point. Open VSCode: not sure what the API can do.</p><p>Empty, utterly empty.</p><p>Although starting something is not easy, it’s just a moment of effort. However, persistence is a week, a month, or a decade of waiting.
Moreover, persistence is not just a slogan; it involves continuous complex reasoning and creativity—no one can survive without surprises every day. And if there are really no surprises, you have to create them yourself.</p><p>After all this talking, it boils down to two things: getting started is always hard, but success takes far more than a start. Many successful individuals have probably walked down this path.</p>]]></content><author><name>DotIN13</name></author><category term="en-us"/><category term="ChatGPT"/><category term="OpenAI"/><category term="Caddy"/><summary type="html"><![CDATA[Sometimes, forgetting to bring an umbrella when it rains heavily leads to carrying one every day until the day before the next heavy rain. Missed out on the California Gold Rush, missed out on Bitcoin, and it seems like we are about to miss out on the large language models again. But it’s okay, what we miss is just this parallel universe. Reverse Proxy for ChatGPT, Is It Too Late? Not too late, silently thinking in your heart, someone will definitely find it useful. Proxying ChatGPT with Caddy Due to strict control by Cloudflare on the web end, there may be a risk of being blocked when reverse proxying. So, we turned to reverse proxying the OpenAI API. It is quite easy to set up the proxy. First, install Caddy 2. Write /etc/Caddyfile: &lt;host&gt;:&lt;port&gt; { reverse_proxy https://api.openai.com { header_up Host api.openai.com } } Where &lt;host&gt; should be replaced with the IP or domain of the proxy server, and &lt;port&gt; should be replaced with the listening port. It is worth noting that OpenAI API’s Cloudflare defense mechanism checks the Host value in the request to determine if the request is indeed sent to OpenAI. If it is not api.openai.com, a 403 Forbidden error will be returned. Therefore, you must set header_up Host api.openai.com to modify the Host in the request header to the corresponding value.
Run sudo systemctl start caddy to start the Caddy server. You can test if the proxy server is working properly using curl: $ curl https://&lt;host&gt;:&lt;port&gt;/v1/models { "error": { "message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.", "type": "invalid_request_error", "param": null, "code": null } } The returned message indicates that an API Key is required, indicating successful configuration. If reverse proxying WOD had a difficulty level of 13, reverse proxying ChatGPT would only score a mere 4! Just a Thought If you prefer using JSON configuration for Caddy, you can also refer to the following configuration: { "admin": { "disabled": true }, "logging": { "logs": { "log0": { "writer": { "output": "stdout" }, "encoder": { "format": "console" }, "level": "WARN" } } }, "apps": { "http": { "servers": { "srv0": { "listen": [":&lt;port&gt;"], "routes": [ { "match": [{ "host": ["&lt;host&gt;"] }], "handle": [ { "handler": "subroute", "routes": [ { "handle": [ { "handler": "reverse_proxy", "headers": { "request": { "set": { "Host": ["api.openai.com"] } } }, "transport": { "protocol": "http", "tls": {} }, "upstreams": [{ "dial": "api.openai.com:443" }] } ] } ] } ], "terminal": true } ] } } } } } One More Thing Going back to the initial question, why did we miss out (please forgive me, I am someone who always likes to ask why)? It seems there are two main aspects: the initial drive and the perseverance. Just like the recent nuclear fusion experiment that successfully ignited and converted energy greater than the energy input, it is destined to be an indelible success—a good start is half the battle.
However, the most difficult part of fusion is to sustainably control plasma, maintain a high-temperature and high-pressure environment, and ensure stable fusion occurs (please forgive me, I am not good at giving examples). Doing something does not have much of a difference. To do something well, you first need a conviction from within yourself, coupled with some determined efforts, which is already very challenging. For example, to use ChatGPT, you need to purchase a mobile number, outwit the Russian telephone number provider, prepare proxy links, contend with domestic network proxy providers, evade OpenAI’s regulation, and compete intellectually with capitalist enemies across the sea. Finally, when you open the interface, you still need to figure out what to ask and how to ask. It’s like going back to the Neolithic era, relearning how to use a stone hammer, stone hoe, how to smelt iron… It may sound complex, but it’s not that complicated. After putting in the effort to use ChatGPT, you can always accomplish something, right? Well, that’s uncertain. Open the Q&amp;A interface: no desire to ask questions. Open the work document: can’t find a starting point. Open VSCode: not sure what the API can do. Empty, utterly empty. Although starting something is not easy, it’s just a moment of effort. However, persistence is a week, a month, or a decade of waiting. Moreover, persistence is not just a slogan; it involves continuous complex reasoning and creativity—no one can survive without surprises every day. And if there are really no surprises, you have to create them yourself. After all the talking, it boils down to these two sentences: the beginning is always difficult, but to succeed is not just beginning. 
Many successful individuals have probably walked down this path.]]></summary></entry><entry><title type="html">在Manjaro上优雅地使用QSV加速Jellyfin转码（自动挡）</title><link href="https://www.wannaexpresso.com/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro/" rel="alternate" type="text/html" title="在Manjaro上优雅地使用QSV加速Jellyfin转码（自动挡）"/><published>2023-05-03T00:00:00+08:00</published><updated>2023-05-03T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro</id><content type="html" xml:base="https://www.wannaexpresso.com/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro/"><![CDATA[<p>FFmpeg啊，真的是天天都想搞个大新闻——你们看，这才一年没到，已经刷了俩大版本了，把版本号直接干到了6.0。</p><p>那咋办，打不过就加入呗，咱们<code class="language-plaintext highlighter-rouge">Jellyfin x Manjaro</code>系列也刷个版本号。</p><p><a href="/2022/01/24/jellyfin-quick-sync-qsv-transcode/">Jellyfin x Manjaro系列第三回</a>只讨论了使用QSV中出现的部份问题；而<a href="/2022/02/05/how-to-enable-qsv-in-ffmpeg-manual/">让FFmpeg用上QSV编码器（手动挡）</a>所介绍的安装方法实在曲折繁琐，只适用于我这样的“五菱高手”——自动挡才是大趋势，手动党难成大业！</p><p>说白了，就是缺一篇完整实现QSV加速、使用FFmpeg 6.0、方便快捷干净卫生的教程呗！</p><h2 id="只要这样再这样再那样">只要这样，再这样，再那样…</h2><p>开个玩笑，其实，在Manjaro上使用QSV非常容易，因为你需要的、你想要的、你不要的软件包，都有大神半仙提前准备好了。</p><blockquote><p>友情提示，本篇教程只适用于支持<code class="language-plaintext highlighter-rouge">intel-media-driver</code>的Intel显卡，具体型号列表见<a href="https://github.com/intel/media-driver#supported-platforms" target="_blank" rel="nofollow noopener noreferrer">Intel Media Driver GitHub仓库</a>。</p></blockquote><h2 id="第一步安装intel显卡驱动">第一步：安装Intel显卡驱动</h2><p>Intel显卡驱动包括驱动程序<code class="language-plaintext highlighter-rouge">intel-media-driver</code>和前端API<code class="language-plaintext highlighter-rouge">intel-media-sdk</code>或<code class="language-plaintext highlighter-rouge">onevpl</code>。其中，较新的<code class="language-plaintext highlighter-rouge">OneVPL</code>仅支持11代及以后的核显/独显。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span 
class="c"># 11代及以上</span>
<span class="nb">sudo </span>pacman <span class="nt">-S</span> intel-media-driver onevpl-intel-gpu

<span class="c"># 其余型号</span>
<span class="nb">sudo </span>pacman <span class="nt">-S</span> intel-media-driver intel-media-sdk
</code></pre></div></div><p>安装完成后，编辑<code class="language-plaintext highlighter-rouge">/etc/profile.d/libva.sh</code>，添加下面两行，告诉系统使用最新的iHD显卡驱动（即<code class="language-plaintext highlighter-rouge">intel-media-driver</code>），而不是已经过时的i965驱动，重启系统使配置生效：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">LIBVA_DRIVERS_PATH</span><span class="o">=</span>/usr/lib/dri
<span class="nv">LIBVA_DRIVER_NAME</span><span class="o">=</span>iHD
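<span class="c"># 如需不重启立即测试，可先在当前终端手动导出上述变量：</span>
<span class="c"># export LIBVA_DRIVERS_PATH=/usr/lib/dri LIBVA_DRIVER_NAME=iHD</span>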
</code></pre></div></div><p>随后安装<code class="language-plaintext highlighter-rouge">libva-utils</code>查看驱动识别情况。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>pacman <span class="nt">-S</span> libva-utils
</code></pre></div></div><p>运行<code class="language-plaintext highlighter-rouge">vainfo</code>命令，如果出现类似下述的输出，则表示驱动已经安装成功。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>vainfo
Trying display: wayland
Trying display: x11
error: can<span class="s1">'t connect to X server!
Trying display: drm
vainfo: VA-API version: 1.18 (libva 2.17.1)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.5.2 (ccc137c92)
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointFEI
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
</span></code></pre></div></div><blockquote><p>如果你的电脑有多张显卡，那么直接运行<code class="language-plaintext highlighter-rouge">vainfo</code>很可能会报错。此时不妨试试<code class="language-plaintext highlighter-rouge">vainfo --display drm --device /dev/dri/renderD12x</code>，将<code class="language-plaintext highlighter-rouge">/dev/dri/renderD12x</code>替换为正确的显卡文件路径。只要有任意一张显卡支持iHD驱动即可，FFmpeg通常会自动识别并使用其中支持QSV的显卡。</p></blockquote><h2 id="第二步安装intel-opencl后端">第二步：安装Intel OpenCL后端</h2><p>Intel显卡驱动的OpenCL后端目前由<a href="https://github.com/intel/compute-runtime" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">intel-compute-runtime</code></a>提供，用于将HDR视频转换为SDR播放，Manjaro官方源的版本较老，因此我们使用AUR源安装。</p><p>AUR软件源是一个软件包共享平台，用户可以自行提交发布软件包与安装脚本供其他用户使用。使用AUR软件源一般需要首先安装yay包管理工具。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>pacman <span class="nt">-S</span> <span class="nt">--needed</span> git base-devel yay
</code></pre></div></div><p>随后使用yay安装<code class="language-plaintext highlighter-rouge">intel-compute-runtime</code>。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yay intel-compute-runtime
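<span class="c"># 若已确定包名，也可直接安装预编译的二进制包：</span>
<span class="c"># yay -S intel-compute-runtime-bin</span>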
</code></pre></div></div><p>在yay展示的各个选项中选择编译好的<a href="https://aur.archlinux.org/packages/intel-compute-runtime-bin" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">intel-compute-runtime-bin</code></a>即可。安装完成后，可以使用<code class="language-plaintext highlighter-rouge">clinfo</code>命令查看是否安装成功。</p><h2 id="第三步也是最后一步安装jellyfin与jellyfin-ffmpeg">第三步（也是最后一步）：安装Jellyfin与Jellyfin FFmpeg</h2><p>最新发布的Jellyfin 10.8.10修复了两个重要安全漏洞，并且推荐与jellyfin-ffmpeg6组合使用。</p><p>AUR已有编译好的<code class="language-plaintext highlighter-rouge">jellyfin-bin</code>软件包供下载，也有<a href="https://github.com/nyanmisaka" target="_blank" rel="nofollow noopener noreferrer">nyanmisaka</a>上传的最新版<code class="language-plaintext highlighter-rouge">jellyfin-ffmpeg6</code>。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yay jellyfin-bin jellyfin-ffmpeg6
</code></pre></div></div><p>最后，使用<code class="language-plaintext highlighter-rouge">systemd</code>启动jellyfin，打开<code class="language-plaintext highlighter-rouge">http://localhost:8096</code>即可食用。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># 立刻启动，并配置开机自启</span>
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="nt">--now</span> jellyfin
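
<span class="c"># 可选：确认服务已正常运行，并查看日志排查问题</span>
<span class="nb">sudo </span>systemctl status jellyfin
<span class="nb">sudo </span>journalctl <span class="nt">-u</span> jellyfin <span class="nt">-e</span>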
</code></pre></div></div><p>在Jellyfin网页界面中进入<code class="language-plaintext highlighter-rouge">Dashboard -&gt; Playback</code>，将硬件加速（Hardware Acceleration）设置为<code class="language-plaintext highlighter-rouge">Intel Quick Sync (QSV)</code>。</p><p>参照下图勾选转码相应功能。</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/playback-settings-320-70ec877e6.avif 320w, /assets/public/images/in-post/post-jellyfin/playback-settings-640-8dc009184.avif 640w, /assets/public/images/in-post/post-jellyfin/playback-settings-960-2e63bf89b.avif 960w, /assets/public/images/in-post/post-jellyfin/playback-settings-1600-2fd1fbd27.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/playback-settings-320-c0baa848a.webp 320w, /assets/public/images/in-post/post-jellyfin/playback-settings-640-bb32ea6bd.webp 640w, /assets/public/images/in-post/post-jellyfin/playback-settings-960-4dcdc343b.webp 960w, /assets/public/images/in-post/post-jellyfin/playback-settings-1600-4dd22ff75.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-jellyfin/playback-settings-1600-4dd22ff75.webp" width="2204" height="1324"></picture><em>Jellyfin Hardware Acceleration Settings</em></div><p><code class="language-plaintext highlighter-rouge">Enable hardware decoding for</code>：对以下视频格式开启硬件解码。应根据<a href="https://github.com/intel/media-driver#decodingencoding-features" target="_blank" rel="nofollow noopener noreferrer">显卡实际支持情况</a>进行选择。</p><p><code class="language-plaintext highlighter-rouge">Prefer OS native DXVA or VA-API hardware decoders</code>：解码时使用DXVA或VA-API硬件解码，而不使用QSV加速。使用QSV解码出错时可以勾选。</p><p><code class="language-plaintext highlighter-rouge">Enable hardware 
encoding</code>：开启硬件编码。需要勾选。</p><p><code class="language-plaintext highlighter-rouge">Enable Intel Low-Power H.264 hardware encoder</code>与<code class="language-plaintext highlighter-rouge">Enable Intel Low-Power HEVC hardware encoder</code>：开启低功耗H.264/HEVC硬件编码器。9代以上的CPU可以尝试勾选这两个选项，以加速HDR转SDR播放。在12代核显上不需要额外进行配置，其他型号请看<a href="https://jellyfin.org/docs/general/administration/hardware-acceleration/intel#low-power-encoding" target="_blank" rel="nofollow noopener noreferrer">Jellyfin官方文档</a>。</p><p><code class="language-plaintext highlighter-rouge">Allow encoding in HEVC format</code>：允许使用HEVC格式编码视频。如果你用来观看视频的设备支持HEVC编码，则建议勾选。</p><p>参照下图勾选HDR色调映射（Tone Mapping）相关功能，用于HDR视频转SDR播放。</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/tone-mapping-settings-320-0ad266d4d.avif 320w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-640-3cd9abd5d.avif 640w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-960-03479607e.avif 960w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-1600-5a52e65cf.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/tone-mapping-settings-320-77a872f82.webp 320w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-640-fb3046bae.webp 640w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-960-51d7fdab8.webp 960w, /assets/public/images/in-post/post-jellyfin/tone-mapping-settings-1600-20d6b4838.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-jellyfin/tone-mapping-settings-1600-20d6b4838.webp" width="1716" height="1052" loading="lazy"></picture><em>Jellyfin Tone Mapping Settings</em></div><p><code class="language-plaintext
highlighter-rouge">Enable VPP Tone mapping</code>：VPP色调映射。效率比OpenCL更高，但仅支持HDR10，兼容性较差，不建议勾选。</p><p><code class="language-plaintext highlighter-rouge">Enable Tone mapping</code>：OpenCL色调映射。建议勾选。</p><p>到此配置完成。</p><h2 id="简单的是不是最好的">简单的是不是最好的</h2><p>大环境总是去繁就简的。我小学的时候，家长接送孩子学的都还是手动挡。十年以后的今天，一眼望去，手动挡已经一车难觅。你问我手动挡和自动挡能做的事情有什么不同？我会说，差不离。但自动挡好上手，容易学，让更多的人能够在很短的时间里学会开车，成为自己的旅途的主人。</p><p>对于操作系统而言，同样如此——那些开着“自动挡”的操作系统在吸引用户方面具有天然的优势。但Linux不是轿车也不是巴士，而是载人航天——一个永远离不开“手动挡”的地方。<a href="https://www.oschina.net/news/237615/manjaro-is-losing-user" target="_blank" rel="nofollow noopener noreferrer">Manjaro Linux正在迅速流失用户</a>这个问题是一个悖论——Manjaro不是Steam OS，作为Linux发行版，它的目标不可能，也不应该是服务大多数人。它更像是一个带教员、掌门人，提供便捷的包管理系统，帮助对Linux真正感兴趣的人了解这个操作系统，并基于此了解计算机的工作原理。用户数量究竟多少并不重要，甚至用户的减少意味着有更多的用户已经“出师”，开始使用更加底层的Arch Linux，或者开始使用更加稳定的Linux发行版进行生产工作，甚至可能已经融会贯通，学会了在一些“自动挡”操作系统上实现各种“手动超控”。</p><p>或许，现在的我们离不开Manjaro，只是因为我们还是书生。</p><p>不如珍惜当下的简单，因为不知何时总要告别。</p>]]></content><author><name>DotIN13</name></author><category term="Jellyfin"/><category term="Manjaro"/><category term="Intel QSV"/><summary type="html"><![CDATA[FFmpeg啊，真的是天天都想搞个大新闻——你们看，这才一年没到，已经刷了俩大版本了，把版本号直接干到了6.0。]]></summary></entry><entry><title type="html">How to Elegantly (And Automatically) Enable QSV for Jellyfin on Manjaro</title><link href="https://www.wannaexpresso.com/en-us/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro/" rel="alternate" type="text/html" title="How to Elegantly (And Automatically) Enable QSV for Jellyfin on Manjaro"/><published>2023-05-03T00:00:00+08:00</published><updated>2023-05-03T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/en-us/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro</id><content type="html" xml:base="https://www.wannaexpresso.com/en-us/2023/05/03/how-to-elegantly-enable-qsv-for-jellyfin-on-manjaro/"><![CDATA[<p>FFmpeg, always making headlines every day - look, it’s only been a year, and they’ve already bumped the version number up to 6.0.</p><p>So what to do? 
If you can’t beat them, join them! Let’s also boost the version number for the <code class="language-plaintext highlighter-rouge">Jellyfin x Manjaro</code> series.</p><p>The <a href="/2022/01/24/jellyfin-quick-sync-qsv-transcode/">third installment</a> of the Jellyfin x Manjaro series only discussed some issues with using QSV, while the installation method highlighted in <a href="/2022/02/05/how-to-enable-qsv-in-ffmpeg-manual/">manually enabling QSV in FFmpeg</a> was quite complex and convoluted. It’s only suitable for “manual-mode pros” like me - automatic mode is the real trend, and manual transmission enthusiasts may find it challenging to succeed!</p><p>To put it simply, we need a complete guide that implements QSV acceleration efficiently, utilizes FFmpeg 6.0, and is convenient, quick, and clean!</p><h2 id="just-like-this-then-that-and-another-one">Just like this, then that, and another one…</h2><p>Just kidding! Actually, using QSV on Manjaro is very easy because the software packages you need, want, and don’t want have already been prepared by the experts.</p><blockquote><p>Friendly reminder, this tutorial is only applicable to Intel GPUs that support <code class="language-plaintext highlighter-rouge">intel-media-driver</code>, specific model lists can be found on the <a href="https://github.com/intel/media-driver#supported-platforms" target="_blank" rel="nofollow noopener noreferrer">Intel Media Driver GitHub repository</a>.</p></blockquote><h2 id="step-one-installing-intel-gpu-drivers">Step One: Installing Intel GPU Drivers</h2><p>Intel GPU drivers include the driver <code class="language-plaintext highlighter-rouge">intel-media-driver</code> and the front-end APIs <code class="language-plaintext highlighter-rouge">intel-media-sdk</code> or <code class="language-plaintext highlighter-rouge">onevpl</code>. 
The newer <code class="language-plaintext highlighter-rouge">OneVPL</code> only supports 11th generation and newer Intel GPUs.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># For 11th generation and above</span>
<span class="nb">sudo </span>pacman <span class="nt">-S</span> intel-media-driver onevpl-intel-gpu

<span class="c"># For other models</span>
<span class="nb">sudo </span>pacman <span class="nt">-S</span> intel-media-driver intel-media-sdk
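
<span class="c"># Optional: confirm the driver package is installed</span>
pacman <span class="nt">-Qi</span> intel-media-driver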
</code></pre></div></div><p>After installation, edit <code class="language-plaintext highlighter-rouge">/etc/profile.d/libva.sh</code> and add the following two lines to instruct the system to use the latest iHD GPU driver (<code class="language-plaintext highlighter-rouge">intel-media-driver</code>) instead of the outdated i965 driver. Restart the system to apply the configuration:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">LIBVA_DRIVERS_PATH</span><span class="o">=</span>/usr/lib/dri
<span class="nv">LIBVA_DRIVER_NAME</span><span class="o">=</span>iHD
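<span class="c"># To test right away without rebooting, export the variables manually</span>
<span class="c"># in the current shell:</span>
<span class="c"># export LIBVA_DRIVERS_PATH=/usr/lib/dri LIBVA_DRIVER_NAME=iHD</span>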
</code></pre></div></div><p>Then install <code class="language-plaintext highlighter-rouge">libva-utils</code> to check the driver recognition.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>pacman <span class="nt">-S</span> libva-utils
</code></pre></div></div><p>Run the <code class="language-plaintext highlighter-rouge">vainfo</code> command. If you see output similar to the following, it means the driver has been successfully installed.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>vainfo
Trying display: wayland
Trying display: x11
error: can<span class="s1">'t connect to X server!
Trying display: drm
vainfo: VA-API version: 1.18 (libva 2.17.1)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.5.2 (ccc137c92)
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      ...
</span></code></pre></div></div><blockquote><p>If your computer has multiple GPUs, running <code class="language-plaintext highlighter-rouge">vainfo</code> directly may result in an error. In this case, try <code class="language-plaintext highlighter-rouge">vainfo --display drm --device /dev/dri/renderD12x</code>, replacing <code class="language-plaintext highlighter-rouge">/dev/dri/renderD12x</code> with the correct GPU file path. As long as one GPU supports the iHD driver, FFmpeg will usually automatically detect and use a GPU that supports QSV.</p></blockquote><h2 id="step-two-installing-the-intel-opencl-backend">Step Two: Installing the Intel OpenCL Backend</h2><p>The OpenCL backend for Intel GPU drivers is currently provided by <a href="https://github.com/intel/compute-runtime" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">intel-compute-runtime</code></a>; it is used here to tone-map HDR videos to SDR for playback. Since the version in the Manjaro official repository is outdated, we will install it from the AUR.</p><p>The AUR (Arch User Repository) is a package-sharing platform where users can submit and publish software packages and build scripts for others to use. To use the AUR, you generally need an AUR helper such as <code class="language-plaintext highlighter-rouge">yay</code>.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>pacman <span class="nt">-S</span> <span class="nt">--needed</span> git base-devel yay
</code></pre></div></div><p>Then use <code class="language-plaintext highlighter-rouge">yay</code> to install <code class="language-plaintext highlighter-rouge">intel-compute-runtime</code>.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yay intel-compute-runtime
</code></pre></div></div><p>Select the pre-compiled <a href="https://aur.archlinux.org/packages/intel-compute-runtime-bin" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">intel-compute-runtime-bin</code></a> from the options presented by <code class="language-plaintext highlighter-rouge">yay</code>. After installation, you can use the <code class="language-plaintext highlighter-rouge">clinfo</code> command to check if it was installed successfully.</p><h2 id="step-three-final-step-installing-jellyfin-and-jellyfin-ffmpeg">Step Three (Final Step): Installing Jellyfin and Jellyfin FFmpeg</h2><p>The recently released Jellyfin 10.8.10 fixes two critical security vulnerabilities and is recommended to be paired with <code class="language-plaintext highlighter-rouge">jellyfin-ffmpeg6</code>.</p><p>There is a pre-compiled <code class="language-plaintext highlighter-rouge">jellyfin-bin</code> package available in the AUR for download, as well as the latest version of <code class="language-plaintext highlighter-rouge">jellyfin-ffmpeg6</code> uploaded by <a href="https://github.com/nyanmisaka" target="_blank" rel="nofollow noopener noreferrer">nyanmisaka</a>.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yay jellyfin-bin jellyfin-ffmpeg6
</code></pre></div></div><p>Finally, start jellyfin using <code class="language-plaintext highlighter-rouge">systemd</code> and open <code class="language-plaintext highlighter-rouge">http://localhost:8096</code> to begin streaming.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Start immediately and configure to start on boot</span>
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="nt">--now</span> jellyfin
</code></pre></div></div><p>In the Jellyfin web interface, go to <code class="language-plaintext highlighter-rouge">Dashboard -&gt; Playback</code>, and set <code class="language-plaintext highlighter-rouge">Hardware Acceleration</code> to <code class="language-plaintext highlighter-rouge">Intel Quick Sync (QSV)</code>.</p><p>Refer to the images to select the appropriate transcoding functions.</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/playback-settings-320-70ec877e6.avif 320w, /assets/public/images/in-post/post-jellyfin/playback-settings-640-8dc009184.avif 640w, /assets/public/images/in-post/post-jellyfin/playback-settings-960-2e63bf89b.avif 960w, /assets/public/images/in-post/post-jellyfin/playback-settings-1600-2fd1fbd27.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-jellyfin/playback-settings-320-c0baa848a.webp 320w, /assets/public/images/in-post/post-jellyfin/playback-settings-640-bb32ea6bd.webp 640w, /assets/public/images/in-post/post-jellyfin/playback-settings-960-4dcdc343b.webp 960w, /assets/public/images/in-post/post-jellyfin/playback-settings-1600-4dd22ff75.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-jellyfin/playback-settings-1600-4dd22ff75.webp" width="2204" height="1324"></picture><em>Jellyfin Hardware Acceleration Settings</em></div><p>By completing the configuration, Jellyfin should be all set for your needs.</p>]]></content><author><name>DotIN13</name></author><category term="en-us"/><category term="Jellyfin"/><category term="Manjaro"/><category term="Intel QSV"/><summary type="html"><![CDATA[FFmpeg, always making headlines every day - look, it’s only been a year, and they’ve already bumped the 
version number up to 6.0. So what to do? If you can’t beat them, join them! Let’s also boost the version number for the Jellyfin x Manjaro series. The third installment of the Jellyfin x Manjaro series only discussed some issues with using QSV, while the installation method highlighted in manually enabling QSV in FFmpeg was quite complex and convoluted. It’s only suitable for “manual-mode pros” like me - automatic mode is the real trend, and manual transmission enthusiasts may find it challenging to succeed! To put it simply, we need a complete guide that implements QSV acceleration efficiently, utilizes FFmpeg 6.0, and is convenient, quick, and clean! Just like this, then that, and another one… Just kidding! Actually, using QSV on Manjaro is very easy because the software packages you need, want, and don’t want have already been prepared by the experts. Friendly reminder, this tutorial is only applicable to Intel GPUs that support intel-media-driver, specific model lists can be found on the Intel Media Driver GitHub repository. Step One: Installing Intel GPU Drivers Intel GPU drivers include the driver intel-media-driver and the front-end APIs intel-media-sdk or onevpl. The newer OneVPL only supports 11th generation and newer Intel GPUs. # For 11th generation and above sudo pacman -S intel-media-driver onevpl-intel-gpu # For other models sudo pacman -S intel-media-driver intel-media-sdk After installation, edit /etc/profile.d/libva.sh and add the following two lines to instruct the system to use the latest iHD GPU driver (intel-media-driver) instead of the outdated i965 driver. Restart the system to apply the configuration: LIBVA_DRIVERS_PATH=/usr/lib/dri LIBVA_DRIVER_NAME=iHD Then install libva-utils to check the driver recognition. sudo pacman -S libva-utils Run the vainfo command. If you see output similar to the following, it means the driver has been successfully installed. $ vainfo Trying display: wayland Trying display: x11 error: can't connect to X server! 
Trying display: drm vainfo: VA-API version: 1.18 (libva 2.17.1) vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.5.2 (ccc137c92) vainfo: Supported profile and entrypoints VAProfileNone : VAEntrypointVideoProc VAProfileNone : VAEntrypointStats ... If your computer has multiple GPUs, running vainfo directly may result in an error. In this case, try vainfo --display drm --device /dev/dri/renderD12x, replacing /dev/dri/renderD12x with the correct GPU file path. As long as one GPU supports the iHD driver, FFmpeg will usually automatically detect and use a GPU that supports QSV. Step Two: Installing the Intel OpenCL Backend The OpenCL backend for Intel GPU drivers is currently provided by intel-compute-runtime to convert HDR videos to SDR for playback. Since the version in the Manjaro official repository is outdated, we will install it from the AUR repository. The AUR repository is a software package sharing platform where users can submit and publish software packages and installation scripts for others to use. To use the AUR repository, you generally need to install the yay package manager tool. sudo pacman -S --needed git base-devel yay Then use yay to install intel-compute-runtime. yay intel-compute-runtime Select the pre-compiled intel-compute-runtime-bin from the options presented by yay. After installation, you can use the clinfo command to check if it was installed successfully. Step Three (Final Step): Installing Jellyfin and Jellyfin FFmpeg The recently released Jellyfin 10.8.10 addresses two critical security vulnerabilities and recommends using it with jellyfin-ffmpeg6. There is a pre-compiled jellyfin-bin package available in the AUR for download, as well as the latest version of jellyfin-ffmpeg6 uploaded by nyanmisaka. yay jellyfin-bin jellyfin-ffmpeg6 Finally, start jellyfin using systemd and open http://localhost:8096 to begin streaming. 
# Start immediately and configure to start on boot sudo systemctl enable --now jellyfin In the Jellyfin web interface, go to Dashboard -&gt; Playback, and set Hardware Acceleration to Intel Quick Sync (QSV). Refer to the images to select the appropriate transcoding functions. Jellyfin Hardware Acceleration Settings By completing the configuration, Jellyfin should be all set for your needs.]]></summary></entry><entry><title type="html">The monopoly of LLMs: Challenges imposed by ChatGPT and OpenAI</title><link href="https://www.wannaexpresso.com/2023/04/20/llm-and-thoughts/" rel="alternate" type="text/html" title="The monopoly of LLMs: Challenges imposed by ChatGPT and OpenAI"/><published>2023-04-20T00:00:00+08:00</published><updated>2023-04-20T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2023/04/20/llm-and-thoughts</id><content type="html" xml:base="https://www.wannaexpresso.com/2023/04/20/llm-and-thoughts/"><![CDATA[<p>LLMs have been creeping into our lives relentlessly since the emergence and then the prevalence of the flagship model ChatGPT over the entire human Internet. However, while everyone is really just trying out, or even starting to rely on, LLMs for an easier and more enjoyable life, it should be noted that LLMs are still, essentially, a game of capital and mode of production in the name of the pursuit of knowledge and truth.</p><h2 id="the-battle-of-model-size">The battle of model size</h2><p>It is almost obvious that extra-large language models would perform better in most tasks than language models in smaller form factors, such as ChatGLM, Alpaca, Llama, etc., especially when it comes to logically intense question answering. 
Under such assumptions, there is simply no reason for individuals to deploy and use smaller-scale LLMs that would almost definitively perform worse than the flagship models that were made readily and freely available online, such as ChatGPT.</p><p>This is not only an alarm bell but a pending death sentence for all the medium-sized models whose sole purpose is to take on flagship models such as ChatGPT - no potential users whatsoever.</p><p>Moreover, the future of such medium-sized models is constrained by the limiting terms of service of what is often their most important data source, that is, ChatGPT: most of the well-performing medium-sized models, such as Alpaca, Vicuna and Koala, rely heavily on data generated by services affiliated with OpenAI. And not to our surprise, even Google Bard was allegedly <a href="https://www.theverge.com/2023/3/29/23662621/google-bard-chatgpt-sharegpt-training-denies" target="_blank" rel="nofollow noopener noreferrer">distilling ChatGPT data to improve its performance</a>.</p><div class="post-img__container post-img wide" data-about-target="wide"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/bard-320-0139bf624.avif 320w, /assets/public/images/in-post/post-llm/bard-640-9991cd7c3.avif 640w, /assets/public/images/in-post/post-llm/bard-960-2121ef482.avif 960w, /assets/public/images/in-post/post-llm/bard-1115-2dee5d878.avif 1115w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/bard-320-ea5c4c79b.webp 320w, /assets/public/images/in-post/post-llm/bard-640-99b19bc41.webp 640w, /assets/public/images/in-post/post-llm/bard-960-ab7f912c7.webp 960w, /assets/public/images/in-post/post-llm/bard-1115-bb60070b7.webp 1115w" type="image/webp"></source><img class="zoomable" 
src="/assets/public/images/in-post/post-llm/bard-1115-bb60070b7.webp" width="1115" height="582"></picture><em>Google denies Bard was trained with ChatGPT data</em></div><p>The deadliest blow comes from the absence of a comprehensive method to evaluate how well the LLMs are actually performing, leaving small-scale LLMs no way to prove their worth. The current evaluation efforts are <a href="https://vicuna.lmsys.org/" target="_blank" rel="nofollow noopener noreferrer">limited to scores given by GPT-4 (as in Vicuna)</a>, <a href="https://bair.berkeley.edu/blog/2023/04/03/koala/#preliminary-evaluation" target="_blank" rel="nofollow noopener noreferrer">human evaluation over a limited set of questions (as in Koala)</a>, and the evaluation of the models against traditional NLP test sets. None of these methods are reliable or convincing enough to be considered golden rules to pick the best model. As such, the amount of exposure a model can get decides its rise and fall, leaving small-scale LLMs no chance against established brands starring OpenAI and Microsoft.</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/chart-320-93fb9d9b9.avif 320w, /assets/public/images/in-post/post-llm/chart-599-bf69f87e3.avif 599w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/chart-320-e5b317a34.webp 320w, /assets/public/images/in-post/post-llm/chart-599-de87c1355.webp 599w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-llm/chart-599-de87c1355.webp" width="599" height="256" loading="lazy"></picture><em>Open Source Vicuna Claims 90% Quality of OpenAI ChatGPT</em></div><p>The development of LLMs is helplessly sliding down the slippery slope of monopoly. 
Credible competitors will be scarce. Eventually smaller-scale LLMs are going to be relevant only in select use cases such as offline deployment in private firms, or mobile deployment in laptops and phones, where the full-scale LLMs are not available, or where the horsepower of a state-of-the-art LLM is simply not necessary.</p><h2 id="the-dominance-of-capital">The dominance of capital</h2><p>OpenAI CEO Sam Altman said in an interview on April 13 that the size of LLMs won’t matter as much moving forward. The remarks can be roughly perceived in two different ways.</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/altman-320-c34a9900d.avif 320w, /assets/public/images/in-post/post-llm/altman-640-bba269915.avif 640w, /assets/public/images/in-post/post-llm/altman-960-f95398ce9.avif 960w, /assets/public/images/in-post/post-llm/altman-1024-0782c825a.avif 1024w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-llm/altman-320-ad99f2dac.webp 320w, /assets/public/images/in-post/post-llm/altman-640-0a30e1d16.webp 640w, /assets/public/images/in-post/post-llm/altman-960-bd2b3d54f.webp 960w, /assets/public/images/in-post/post-llm/altman-1024-546462c26.webp 1024w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-llm/altman-1024-546462c26.webp" width="1024" height="683" loading="lazy"></picture><em>Sam Altman: Size of LLMs won't matter as much moving forward / GettyImages</em></div><p>First, if what he said was based on facts discovered in experiments conducted inside OpenAI, which is perhaps the only entity on earth with the capability to conduct such research, OpenAI will still be the one and only firm with the capital and access to fine-tune the largest 
and most performant models in the world, which they own. And that automatically implies unfair competition to come.</p><p>Second, if the signal was just camouflage, and increasing the size of the model does still give performance boosts, then it is essentially telling the other competitors to back down from the war of model size and instead research fine-tuning, which OpenAI can easily snatch and copy whenever the research is published, while it can still work on increasing the quantity and quality of its training data.</p><p>In either case, OpenAI wins and the monopoly prevails.</p><h2 id="implications-and-thoughts">Implications and Thoughts</h2><p>According to <a href="https://keensight.ai/about-us" target="_blank" rel="nofollow noopener noreferrer">an anonymous source (Xiaoyi Ma)</a>, the gap in artificial intelligence technologies between states will only expand in the years to come. And that will be partly because the Big NLPs control the biggest data, the biggest infrastructure, and the biggest capital, which will only attract even more investment and human resources for them.</p><p>What is even more alarming is that the development of open-source medium-sized models is very likely to be suppressed by the success of the flagship models, given the lack of proper evaluation methods and adequate public exposure, which will ultimately reflect in the slow replication and assimilation of such technologies among the other states.</p><p>The introduction of accelerated training frameworks like <a href="https://github.com/microsoft/DeepSpeed" target="_blank" rel="nofollow noopener noreferrer">DeepSpeed</a> might help bridge the otherwise uncrossable gap between medium-sized and large-sized models. 
However, the lack of open data and the fact that the highest-quality data comes from ChatGPT still make me wonder if the monopoly can ever be lifted.</p>]]></content><author><name>DotIN13</name></author><category term="LLM"/><category term="NLP"/><category term="ChatGPT"/><summary type="html"><![CDATA[LLMs have been creeping into our lives relentlessly since the emergence and then the prevalence of the flagship model ChatGPT over the entire human Internet. However, while everyone is really just trying out, or even starting to rely on, LLMs for an easier and more enjoyable life, it should be noted that LLMs are still, essentially, a game of capital and mode of production in the name of the pursuit of knowledge and truth.]]></summary></entry><entry><title type="html">Ubuntu VMs, but with KVM+RDP</title><link href="https://www.wannaexpresso.com/2023/03/06/kvm-test-run/" rel="alternate" type="text/html" title="Ubuntu VMs, but with KVM+RDP"/><published>2023-03-06T00:00:00+08:00</published><updated>2023-03-06T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2023/03/06/kvm-test-run</id><content type="html" xml:base="https://www.wannaexpresso.com/2023/03/06/kvm-test-run/"><![CDATA[<h2 id="只因不够用">Not Enough Machines</h2><p>The office servers were just barely enough.</p><p>But roughly 45.67% of the reason was that a certain barbarian kept seizing machines by force, scheming to occupy yet another one; and roughly 62.72% of the reason was that a certain coward played deaf and dumb, unwilling to sacrifice his own machine. And so, on this worldline that occurred with 108.39% probability, there appeared a kind soul willing to contribute his own machine and take the loss.</p><p>In the end, there were still too few machines to go around.</p><h2 id="无中生有">Something from Nothing</h2><p>The coward, though a little stingy himself, felt bad upon hearing that the kind soul had no machine to use, yet he could hardly conjure one out of thin air.</p><p>“If I can’t make something from nothing, I’ll just have to try fitting a whole ceremony inside a snail shell.”</p><p>The coward decided to use KVM to split one machine into two: one part for himself, one part for the kind soul. However overbearing the barbarian might be, he would have no excuse left to grab another.</p><h2 id="使用kvm运行ubuntu-server">Running Ubuntu Server with KVM</h2><h3 id="安装libvirt">Installing libvirt</h3><p>As the coward understood it, KVM is merely the virtualization mechanism; it is still QEMU that emulates the hardware for the guests to use, and still the libvirt commands that actually manage the virtual machines. So QEMU and libvirt had to be installed first.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">--no-install-recommends</span> qemu-system libvirt-clients libvirt-daemon-system 
qemu-utils
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--no-install-recommends</code>: do not install recommended packages. Choose this option if you do not need the graphical management tools.</p><p>The coward, ever cautious, also wanted his non-root user to be able to manage the virtual machines, so he ran the following command:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>adduser &lt;youruser&gt; libvirt
</code></pre></div></div><p>That alone is not enough: if you run the VM management command <a href="https://packages.debian.org/virtinst" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">virsh</code></a> directly, it manages the virtual machines under the current user; to manage the virtual machines owned by root, the following adjustment is needed as well.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh <span class="nt">--connect</span> qemu:///system list <span class="nt">--all</span>
</code></pre></div></div><p>But then <code class="language-plaintext highlighter-rouge">--connect qemu:///system</code> would have to be typed on every single invocation; who could stand that? Luckily, an environment variable can be exported so that <code class="language-plaintext highlighter-rouge">virsh</code> devotes itself entirely to the system’s virtual machines.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Put the following environment variable declaration into ~/.bashrc or ~/.zshrc</span>
<span class="nb">export </span><span class="nv">LIBVIRT_DEFAULT_URI</span><span class="o">=</span><span class="s1">'qemu:///system'</span>
</code></pre></div></div><blockquote><p>See <a href="https://wiki.debian.org/KVM" target="_blank" rel="nofollow noopener noreferrer">Debian Wiki/KVM</a>.</p></blockquote><h3 id="配置虚拟网络">Configuring the Virtual Network</h3><p>The office was short not only on machines but also on network cables. The coward’s machine had just one cable plugged in, so that single cable had to carry traffic for both the host and the guests. On top of that, the guests needed to get LAN IP addresses of their own; otherwise the coward would have to set up port forwarding for the kind soul, and what a hassle that would be!</p><p>libvirt’s network configurations fall roughly into three kinds:</p><ol><li>Bridged network. The host and all guests share one network interface and sit on the same subnet, each with its own LAN IP, directly reachable from the outside.</li><li>NAT-based network. The host and all guests share one network interface but sit on different subnets, with the host acting as the DHCP server for all guests. NAT is libvirt’s default network mode.</li><li>Routed network. The host and all guests share one network interface and sit on the same subnet, each with its own LAN IP, but the outside network knows nothing about the internal layout; a static route must be configured on the upstream router before external devices can reach the guests directly.</li></ol><p>Unsurprisingly, the coward picked the simplest option: the bridged network.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>ip <span class="nb">link </span>add br0 <span class="nb">type </span>bridge <span class="c"># Add a bridge named br0</span>
<span class="nb">sudo </span>ip <span class="nb">link set</span> &lt;device&gt; up <span class="c"># Bring up a network device, e.g. the interface enp0s2</span>
<span class="nb">sudo </span>ip <span class="nb">link set</span> &lt;device&gt; master br0 <span class="c"># Attach the device to the bridge</span>
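</code></pre></div></div>

<p>Incidentally, the /24 prefix on the address assigned below corresponds to netmask 255.255.255.0 and a broadcast address ending in .255, values the persistent configuration further down spells out. A throwaway pure-shell sketch of that conversion (valid for /24 only; the address is this post’s example):</p>

```shell
# Sketch: split a /24 CIDR address into its parts (only handles /24).
cidr=192.168.1.142/24
ip=${cidr%/*}                  # address without the prefix length
echo "netmask:   255.255.255.0"
echo "broadcast: ${ip%.*}.255"
```

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>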
<span class="nb">sudo </span>ip address add dev br0 192.168.1.142/24 <span class="c"># Set the host bridge IP to 192.168.1.142</span>
</code></pre></div></div><p>With that, the bridge is configured, but the setup evaporates on reboot. To make it persist, the <code class="language-plaintext highlighter-rouge">bridge-utils</code> package is needed.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>bridge-utils
</code></pre></div></div><p>Next, configure the network interfaces file. For example, if the interface previously in use was enp0s2, replace the existing <code class="language-plaintext highlighter-rouge">iface enp0s2 inet dhcp</code> line with the following:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Set interface enp0s2 to manual configuration to avoid conflicts with NetworkManager</span>
iface enp0s2 inet manual
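</code></pre></div></div>

<p>A classic mistake in this file is a gateway that sits outside the bridge’s /24. A quick pure-shell sanity check (the addresses are assumptions matching this post’s example subnet, and the check only handles /24):</p>

```shell
# Sketch: with a 255.255.255.0 netmask, the gateway must share the
# first three octets of the static address.
addr=192.168.1.142
gw=192.168.1.1
if [ "${addr%.*}" = "${gw%.*}" ]; then
  echo "gateway reachable"
else
  echo "gateway outside /24"
fi
```

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>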

<span class="c"># Configure the bridge br0</span>
auto br0
iface br0 inet static
    bridge_ports enp0s2
        address 192.168.1.142
        broadcast 192.168.1.255
        netmask 255.255.255.0
        gateway 192.168.1.1
</code></pre></div></div><p>Restart the networking service with systemd and the network configuration takes effect.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl restart networking
</code></pre></div></div><p>Next, for the libvirt guests to use the bridge br0, the bridge also has to be declared to libvirt.</p><p>First create a file named <code class="language-plaintext highlighter-rouge">br0-bridge.xml</code> with the following content:</p><div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">&lt;network&gt;</span>
    <span class="nt">&lt;name&gt;</span>br0-bridge<span class="nt">&lt;/name&gt;</span>
    <span class="nt">&lt;forward</span> <span class="na">mode=</span><span class="s">"bridge"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;bridge</span> <span class="na">name=</span><span class="s">"br0"</span> <span class="nt">/&gt;</span>
<span class="nt">&lt;/network&gt;</span>
</code></pre></div></div><p>Then run the <code class="language-plaintext highlighter-rouge">virsh</code> command to import the definition.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh net-define br0-bridge.xml
</code></pre></div></div><p>With <code class="language-plaintext highlighter-rouge">virsh net-list --all</code> you can now see all existing networks.</p><blockquote><p>See <a href="https://linuxconfig.org/how-to-use-bridged-networking-with-libvirt-and-kvm" target="_blank" rel="nofollow noopener noreferrer">Bridged Networking with libvirt</a>.</p></blockquote><h3 id="安装ubuntu-server">Installing Ubuntu Server</h3><p>Timid as he was, the coward was also famously meticulous. The kind soul had always used the most orthodox Ubuntu Desktop with the gnome desktop environment. Having braved KVM precisely so that the kind soul would have a machine to use, how could the coward not shape the guest into exactly what the kind soul loved?</p><p>Unluckily, the coward’s own machine was running Debian without any graphical interface, while the Ubuntu Desktop installer happens to require one. There was nothing for it but to settle for second best: install Ubuntu Server and sort out the graphical interface separately.</p><p>First download the system image, then install the <code class="language-plaintext highlighter-rouge">libosinfo-bin</code> package to help the <code class="language-plaintext highlighter-rouge">virt-install</code> command recognize the OS version:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>libosinfo-bin
</code></pre></div></div><p>You can run <code class="language-plaintext highlighter-rouge">osinfo-query os</code> to see the OS versions that <code class="language-plaintext highlighter-rouge">virt-install</code> supports. Since <a href="https://askubuntu.com/questions/1070500/why-doesnt-osinfo-query-os-detect-ubuntu-18-04" target="_blank" rel="nofollow noopener noreferrer">Ubuntu 22.04 is not in the list shipped with the <code class="language-plaintext highlighter-rouge">libosinfo-bin</code> package</a>, the osinfo database also has to be updated with a manual download from the <a href="https://releases.pagure.org/libosinfo/">libosinfo</a> hosting site.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget <span class="nt">-O</span> <span class="s2">"/tmp/osinfo-db.tar.xz"</span> <span class="s2">"https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"</span>
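</code></pre></div></div>

<p>When bumping to a newer osinfo-db snapshot later, the local filename can be derived from the URL with parameter expansion instead of being hard-coded twice (a small sketch; the URL is the one used above):</p>

```shell
# Sketch: take the path component after the last "/" as the download filename.
url="https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"
file="${url##*/}"
echo "$file"
```

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>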
osinfo-db-import <span class="nt">--user</span> <span class="s2">"/tmp/osinfo-db.tar.xz"</span>
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--user</code>: where to import the database. With <code class="language-plaintext highlighter-rouge">--user</code>, the database is stored in <code class="language-plaintext highlighter-rouge">~/.config/osinfo</code>; with <code class="language-plaintext highlighter-rouge">--local</code>, in <code class="language-plaintext highlighter-rouge">/etc/osinfo</code>; and with <code class="language-plaintext highlighter-rouge">--system</code>, in <code class="language-plaintext highlighter-rouge">/usr/share/osinfo</code>.</p><p>Compose the VM installation command:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virt-install <span class="nt">--virt-type</span> kvm <span class="nt">--name</span> &lt;domain-name&gt; <span class="se">\</span>
  <span class="nt">--location</span> &lt;path/to/ubuntu-22.04.iso&gt;,kernel<span class="o">=</span>casper/vmlinuz,initrd<span class="o">=</span>casper/initrd <span class="se">\</span>
  <span class="nt">--os-variant</span> ubuntu22.04 <span class="se">\</span>
  <span class="nt">--vcpus</span> 10,maxvcpus<span class="o">=</span>20 <span class="nt">--cpu</span> host <span class="se">\</span>
  <span class="nt">--disk</span> <span class="nv">size</span><span class="o">=</span>120 <span class="nt">--memory</span> 4096 <span class="se">\</span>
  <span class="nt">--network</span> network=br0-bridge <span class="se">\</span>
  <span class="nt">--graphics</span> none <span class="se">\</span>
  <span class="nt">--console</span> pty,target_type<span class="o">=</span>serial <span class="se">\</span>
  <span class="nt">--extra-args</span> <span class="s2">"console=ttyS0"</span>
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--name</code>: the name of the virtual machine, also called its domain name.</p><p><code class="language-plaintext highlighter-rouge">--location</code>: the location of the OS image. It can be a network location, such as <code class="language-plaintext highlighter-rouge">https://cn.archive.ubuntu.com/ubuntu/dists/jammy/main/installer-amd64/</code>, or a local path. The <code class="language-plaintext highlighter-rouge">--cdrom</code> parameter can also point at an OS image, but it only supports local paths. In an environment without a graphical interface, the system must be installed over a serial console enabled through the custom kernel arguments of <code class="language-plaintext highlighter-rouge">--extra-args</code>, and <code class="language-plaintext highlighter-rouge">--cdrom</code> happens not to support custom kernel arguments, so only <code class="language-plaintext highlighter-rouge">--location</code> will do. Since <a href="https://askubuntu.com/questions/789358/virt-install-using-location-with-iso-image-no-longer-working" target="_blank" rel="nofollow noopener noreferrer">the <code class="language-plaintext highlighter-rouge">--location</code> option cannot automatically locate the kernel inside the image</a>, <code class="language-plaintext highlighter-rouge">kernel=casper/vmlinuz,initrd=casper/initrd</code> has to be specified manually.</p><p><code class="language-plaintext highlighter-rouge">--os-variant</code>: the operating system type. Run <code class="language-plaintext highlighter-rouge">osinfo-query os</code> to see the supported versions.</p><p><code class="language-plaintext highlighter-rouge">--vcpus</code>: the initial number of CPU threads. Each virtual thread in a KVM guest is backed by one real thread, so configuring more virtual CPUs than real threads is pointless. A single real thread can, however, be assigned to several virtual machines at once, completing the work dispatched by each of them through the scheduler, which is where CPU overselling comes from.</p><p><code class="language-plaintext highlighter-rouge">--cpu</code>: the CPU configuration, covering the CPU model and features. When the model is set to <code class="language-plaintext highlighter-rouge">host</code>, the guest gets every feature of the host CPU, but live migration may become impossible.</p><p><code class="language-plaintext highlighter-rouge">--disk opt1=val1,opt2=val2,...</code>: the guest’s storage device. Its size can be set with the <code class="language-plaintext highlighter-rouge">size</code> option and its path with the <code class="language-plaintext highlighter-rouge">path</code> option.</p><p><code class="language-plaintext highlighter-rouge">--memory</code>: the memory size.</p><p><code class="language-plaintext highlighter-rouge">--network</code>: the network to use; pick the bridged network br0-bridge created earlier.</p><p><code class="language-plaintext highlighter-rouge">--graphics</code>, <code class="language-plaintext highlighter-rouge">--console</code>, <code class="language-plaintext highlighter-rouge">--extra-args</code>: configure a serial console for the guest, used to operate it directly from the host when no graphical interface is available.</p><p>Run the command above, enter the installer, choose basic mode, and the system can be installed through the text console.</p><blockquote><p>See <a href="https://linux.die.net/man/1/virt-install" target="_blank" rel="nofollow noopener noreferrer">virt-install(1)</a>.</p></blockquote><h3 id="安装vnc">Installing VNC</h3><p>In the old days the kind soul could simply sit down in front of his own server, brew a cup of water, and, sipping the lukewarm water, unhurriedly light up the screen to set up environments or run code.</p><p>But since the coward had not installed a desktop environment, the kind soul had no way to open the guest’s graphical interface. Timid as the coward was, a quick mental calculation showed that a properly configured remote desktop would actually be more convenient than sitting in front of the server as before; the benefit outweighed the cost. So he gritted his teeth and started configuring it for the kind soul.</p><p>First of all, VNC needs a desktop environment (remember, what was just installed is Ubuntu Server!). The coward hand-picked the kind soul’s beloved gnome.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>gnome-session gdm3 <span class="c"># Install the gnome desktop environment and the display manager gdm3</span>
<span class="nb">sudo </span>apt <span class="nb">install </span>ubuntu-desktop <span class="c"># 安装桌面环境必须的各个软件包</span>
<span class="nb">sudo </span>systemctl set-default multi-user.target <span class="c"># 不要默认启动图形环境</span>
</code></pre></div></div><p>VNC服务端选用了TigerVNC。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>tigervnc-standalone-server dbus-x11
</code></pre></div></div><p>配置<code class="language-plaintext highlighter-rouge">~/.vnc/xstartup</code>：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh</span>

<span class="o">[</span> <span class="nt">-x</span> /etc/vnc/xstartup <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">exec</span> /etc/vnc/xstartup
<span class="o">[</span> <span class="nt">-r</span> <span class="nv">$HOME</span>/.Xresources <span class="o">]</span> <span class="o">&amp;&amp;</span> xrdb <span class="nv">$HOME</span>/.Xresources
vncconfig <span class="nt">-iconic</span> &amp;
<span class="nb">export </span><span class="nv">DESKTOP_SESSION</span><span class="o">=</span>/usr/share/xsessions/ubuntu.desktop
<span class="nb">export </span><span class="nv">XDG_CURRENT_DESKTOP</span><span class="o">=</span>ubuntu:GNOME
<span class="nb">export </span><span class="nv">GNOME_SHELL_SESSION_MODE</span><span class="o">=</span>ubuntu
<span class="nb">export </span><span class="nv">XDG_DATA_DIRS</span><span class="o">=</span>/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch <span class="nt">--exit-with-session</span> /usr/bin/gnome-session <span class="nt">--systemd</span> <span class="nt">--session</span><span class="o">=</span>ubuntu
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--systemd</code>：如果gnome-shell版本低于3.40，则需要省去该参数。</p><p>配置<code class="language-plaintext highlighter-rouge">~/.vnc/config</code>：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">session</span><span class="o">=</span>ubuntu
<span class="nv">geometry</span><span class="o">=</span>1920x1080
localhost
alwaysshared
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">alwaysshared</code>：所有客户端都会连接到同一个会话。</p><p>配置<code class="language-plaintext highlighter-rouge">/etc/systemd/system/vncserver@.service</code>：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>Unit]
<span class="nv">Description</span><span class="o">=</span>Start TigerVNC server at startup
<span class="nv">After</span><span class="o">=</span>syslog.target network.target

<span class="o">[</span>Service]
<span class="nv">Type</span><span class="o">=</span>forking
<span class="nv">User</span><span class="o">=</span>&lt;youruser&gt;
<span class="nv">Group</span><span class="o">=</span>&lt;youruser&gt;
<span class="nv">WorkingDirectory</span><span class="o">=</span>/home/&lt;youruser&gt;
<span class="nv">PIDFile</span><span class="o">=</span>/home/&lt;youruser&gt;/.vnc/%H:%i.pid
<span class="nv">ExecStartPre</span><span class="o">=</span>-/bin/sh <span class="nt">-c</span> <span class="s2">"/usr/bin/vncserver -kill :%i &gt; /dev/null 2&gt;&amp;1"</span>
<span class="nv">ExecStart</span><span class="o">=</span>/usr/bin/vncserver <span class="nt">-depth</span> 24 <span class="nt">-geometry</span> 1920x1080 <span class="nt">-localhost</span> :%i
<span class="nv">ExecStop</span><span class="o">=</span>/usr/bin/vncserver <span class="nt">-kill</span> :%i
<span class="nv">Restart</span><span class="o">=</span>on-success
<span class="nv">RestartSec</span><span class="o">=</span>10

<span class="o">[</span>Install]
<span class="nv">WantedBy</span><span class="o">=</span>multi-user.target
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">-localhost</code>：仅允许从本机访问VNC。如果需要远程访问，则需要配合SSH安全隧道进行转发。</p><p><code class="language-plaintext highlighter-rouge">Restart</code>、<code class="language-plaintext highlighter-rouge">RestartSec</code>：<a href="https://unix.stackexchange.com/questions/43398/is-it-possible-to-keep-a-vnc-server-alive-after-log-out" target="_blank" rel="nofollow noopener noreferrer">当客户端进行注销操作时，服务端就会自行退出</a>，如果希望服务端继续运行，则需要添加上述重启参数。</p><p>最后，通过systemd开启VNC服务端：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="nt">--now</span> vncserver@1
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">@1</code>：会话编号。会话编号设为1，则端口号为5901；编号为2，端口号为5902，以此类推。</p><p>最后，建立SSH安全隧道，连接VNC：</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh &lt;youruser&gt;@&lt;serverip&gt; <span class="nt">-L</span> 9901:localhost:5901
</code></pre></div></div><p>用VNC客户端连接<code class="language-plaintext highlighter-rouge">localhost:5901</code>，画面似乎有些模糊，将客户端的质量设为“高”，一切都很美好。</p><h2 id="只是因为在人群中多看了你一眼">只是因为在人群中多看了你一眼</h2><p>到这里，胆小鬼已经花了整整一天的时间，就为了给好心人装配一台KVM虚拟机（而且是带远程访问的那种）。如果你要问他为什么要做这些，他或许会回答你，“只是因为在人群中多看了你一眼，我就知道你是好心人。对好心人，自然不能太差劲。”</p><p>胆小鬼鼓足勇气，想告诉好心人自己为他准备了一个非常好用的虚拟机。但看见好心人在专心忙活别的事情，胆小鬼走到他边上，欲言又止，转身又走了开去，好像什么也没发生过。</p>]]></content><author><name>DotIN13</name></author><category term="Linux"/><category term="Ubuntu"/><category term="KVM"/><category term="RDP"/><summary type="html"><![CDATA[只因不够用]]></summary></entry><entry><title type="html">Ubuntu Virtual Machine, but with KVM+RDP</title><link href="https://www.wannaexpresso.com/en-us/2023/03/06/kvm-test-run/" rel="alternate" type="text/html" title="Ubuntu Virtual Machine, but with KVM+RDP"/><published>2023-03-06T00:00:00+08:00</published><updated>2023-03-06T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/en-us/2023/03/06/kvm-test-run</id><content type="html" xml:base="https://www.wannaexpresso.com/en-us/2023/03/06/kvm-test-run/"><![CDATA[<h2 id="just-because-its-not-enough">Just because it’s not enough</h2><p>The office server is just enough.</p><p>But about 45.67% of the reason is that there is a barbarian who is trying to snatch a machine, wanting to occupy another one; and about 62.72% of the reason is that there is a coward who pretends to be deaf and dumb, unwilling to sacrifice his own machine. 
So, in this world line where the probability is 108.39%, there is an individual who is willing to contribute, willing to be the benevolent one who takes a loss.</p><p>But it turns out to be still too little.</p><h2 id="making-something-out-of-nothing">Making something out of nothing</h2><p>Although the coward is a bit stingy, he feels uncomfortable hearing that the good-hearted person has no machine to use; still, it’s not easy to conjure a machine out of thin air.</p><p>“Since you can’t make something out of nothing, then you can only try ‘building a pagoda in a snail shell’ or ‘sailing a boat in the prime minister’s belly.’”</p><p>The coward decides to use KVM to split one machine into two, one for themselves and one for the kind-hearted individual. Even if the barbarian acts up again, they won’t be able to find a reason to snatch it again.</p><h2 id="running-ubuntu-server-with-kvm">Running Ubuntu Server with KVM</h2><h3 id="installing-libvirt">Installing libvirt</h3><p>According to the timid one’s understanding, KVM only provides the virtualization mechanism; the actual emulation of hardware for the virtual machines is still done by QEMU, and the libvirt commands are the true managers of the virtual machines. Therefore, it is necessary to first install QEMU and libvirt.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">--no-install-recommends</span> qemu-system libvirt-clients libvirt-daemon-system qemu-utils
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--no-install-recommends</code>: Do not install recommended packages. If graphical management tools are not needed, this option can be selected.</p><p>As the timid one is cautious, they want their non-root user to be able to manage virtual machines as well. Therefore, run the following command:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>adduser &lt;youruser&gt; libvirt
</code></pre></div></div><p>But that’s not enough. By default, the virtual machine management command <code class="language-plaintext highlighter-rouge">virsh</code> connects to the per-user session (<code class="language-plaintext highlighter-rouge">qemu:///session</code>) and only sees virtual machines defined under the current user. To manage the system-wide virtual machines running under root (<code class="language-plaintext highlighter-rouge">qemu:///system</code>), the connection URI has to be given explicitly.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh <span class="nt">--connect</span> qemu:///system list <span class="nt">--all</span>
</code></pre></div></div><p>As a result, every command needs the extra <code class="language-plaintext highlighter-rouge">--connect qemu:///system</code> argument. How inconvenient is that? Luckily, an environment variable can be set so that <code class="language-plaintext highlighter-rouge">virsh</code> connects to the system instance by default.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Place the following environment variable declaration in ~/.bashrc or ~/.zshrc</span>
<span class="nb">export </span><span class="nv">LIBVIRT_DEFAULT_URI</span><span class="o">=</span><span class="s1">'qemu:///system'</span>
</code></pre></div></div><blockquote><p>Reference: <a href="https://wiki.debian.org/KVM" target="_blank" rel="nofollow noopener noreferrer">Debian Wiki/KVM</a>.</p></blockquote><h3 id="configuring-virtual-networks">Configuring Virtual Networks</h3><p>The office lacks more than just space; there’s also a shortage of network cables. The timid one’s machine only has one network cable plugged in, so there must be a way for that single cable to provide internet access for both the host and guest machines. Moreover, the guest machine needs to be able to assign internal IP addresses, or else generous souls will have to handle port mapping, which is quite a hassle!</p><p>Network configurations in libvirt are roughly divided into three types:</p><ol><li>Bridged Network. The host and all guest machines share a network interface, are on the same network segment, each has its own internal IP, and can be accessed directly from the outside.</li><li>NAT Network. The default network mode in libvirt where the host and all guest machines share a network interface but are on different network segments, with the host serving as the DHCP server for all guest machines.</li><li>Routed Network. The host and all guest machines share a network interface, are on the same network segment, each has its own internal IP, but external networks are unaware of the internal network configuration. External devices must configure static routes on the router to allow direct access to the virtual machines.</li></ol><p>As expected, the timid one chose the simplest Bridged Network configuration.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>ip <span class="nb">link </span>add br0 <span class="nb">type </span>bridge <span class="c"># Add a bridge named br0</span>
<span class="nb">sudo </span>ip <span class="nb">link set</span> &lt;device&gt; up <span class="c"># Enable a network device, such as the enp0s2 interface</span>
<span class="nb">sudo </span>ip <span class="nb">link set</span> &lt;device&gt; master br0 <span class="c"># Add the device to the bridge</span>
<span class="nb">sudo </span>ip address add dev br0 192.168.1.142/24 <span class="c"># Set the host bridge's IP to 192.168.1.142</span>
</code></pre></div></div><p>The bridge is now configured, but it will be lost after a reboot. To make it persistent through <code class="language-plaintext highlighter-rouge">/etc/network/interfaces</code>, install the <code class="language-plaintext highlighter-rouge">bridge-utils</code> package, which lets ifupdown manage bridges.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>bridge-utils
</code></pre></div></div><p>Next, configure the network interface. For example, if the original interface used was enp0s2, replace the original <code class="language-plaintext highlighter-rouge">iface enp0s2 inet dhcp</code> line with the following content:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Set network interface enp0s2 to manual configuration to avoid conflicts with NetworkManager</span>
iface enp0s2 inet manual

<span class="c"># Configure bridge br0</span>
auto br0
iface br0 inet static
    bridge_ports enp0s2
    address 192.168.1.142
    broadcast 192.168.1.255
    netmask 255.255.255.0
    gateway 192.168.1.1
</code></pre></div></div><p>Use systemd to restart the networking service, and the network configuration will take effect.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl restart networking
</code></pre></div></div><p>Then, in order for libvirt’s virtual machines to use the br0 bridge, the bridge needs to be declared.</p><p>First, create a <code class="language-plaintext highlighter-rouge">br0-bridge.xml</code> file with the following content:</p><div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">&lt;network&gt;</span>
    <span class="nt">&lt;name&gt;</span>br0-bridge<span class="nt">&lt;/name&gt;</span>
    <span class="nt">&lt;forward</span> <span class="na">mode=</span><span class="s">"bridge"</span> <span class="nt">/&gt;</span>
    <span class="nt">&lt;bridge</span> <span class="na">name=</span><span class="s">"br0"</span> <span class="nt">/&gt;</span>
<span class="nt">&lt;/network&gt;</span>
</code></pre></div></div><p>Then run the <code class="language-plaintext highlighter-rouge">virsh</code> command to import the declared configuration.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh net-define br0-bridge.xml
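# A newly defined network is inactive; it likely needs to be started
# and marked autostart before guests can reference it by name
virsh net-start br0-bridge
virsh net-autostart br0-bridge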
</code></pre></div></div><p>Use <code class="language-plaintext highlighter-rouge">virsh net-list --all</code> to view all existing networks.</p><blockquote><p>Reference: <a href="https://linuxconfig.org/how-to-use-bridged-networking-with-libvirt-and-kvm" target="_blank" rel="nofollow noopener noreferrer">Bridged Networking with libvirt</a>.</p></blockquote><h3 id="installing-ubuntu-server">Installing Ubuntu Server</h3><p>Although the timid one is fearful, they are also known for being meticulous. The generous souls have always used the genuinely popular Ubuntu Desktop with the GNOME desktop environment. This time, in order for the generous souls to have an organic experience, the timid one reluctantly dabbled with KVM without a graphical interface. Shouldn’t the virtual machine be configured to their liking?</p><p>However, fate has it that the timid one’s own system originally ran a non-graphical Debian, and Ubuntu Desktop installation requires a graphical environment. Unfortunately, as a compromise, Ubuntu Server was installed with the graphical interface to be solved separately.</p><p>First, download the system image, then install the <code class="language-plaintext highlighter-rouge">libosinfo-bin</code> package to help the <code class="language-plaintext highlighter-rouge">virt-install</code> command recognize the system version:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>libosinfo-bin
</code></pre></div></div><p>Run the <code class="language-plaintext highlighter-rouge">osinfo-query os</code> command to view the system versions supported by <code class="language-plaintext highlighter-rouge">virt-install</code>. Since <a href="https://askubuntu.com/questions/1070500/why-doesnt-osinfo-query-os-detect-ubuntu-18-04" target="_blank" rel="nofollow noopener noreferrer">Ubuntu 22.04 is not included in the list provided by the <code class="language-plaintext highlighter-rouge">libosinfo-bin</code> package</a>, it is necessary to manually download and update the osinfo database from the <a href="https://releases.pagure.org/libosinfo/" target="_blank" rel="nofollow noopener noreferrer">libosinfo</a> hosting site.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget <span class="nt">-O</span> <span class="s2">"/tmp/osinfo-db.tar.xz"</span> <span class="s2">"https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"</span>
osinfo-db-import <span class="nt">--user</span> <span class="s2">"/tmp/osinfo-db.tar.xz"</span>
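# Optional: confirm that the imported database knows Ubuntu 22.04
osinfo-query os | grep ubuntu22.04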
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--user</code>: Database import location. Choosing <code class="language-plaintext highlighter-rouge">--user</code> stores the database in <code class="language-plaintext highlighter-rouge">~/.config/osinfo</code>, while choosing <code class="language-plaintext highlighter-rouge">--local</code> stores it in <code class="language-plaintext highlighter-rouge">/etc/osinfo</code>, and selecting <code class="language-plaintext highlighter-rouge">--system</code> stores it in <code class="language-plaintext highlighter-rouge">/usr/share/osinfo</code>.</p><p>Edit the virtual machine installation command:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virt-install <span class="nt">--virt-type</span> kvm <span class="nt">--name</span> &lt;domain-name&gt; <span class="se">\</span>
  <span class="nt">--location</span> &lt;path/to/ubuntu-22.04.iso&gt;,kernel<span class="o">=</span>casper/vmlinuz,initrd<span class="o">=</span>casper/initrd <span class="se">\</span>
  <span class="nt">--os-variant</span> ubuntu22.04 <span class="se">\</span>
  <span class="nt">--vcpu</span> 10,maxvcpus<span class="o">=</span>20 <span class="nt">--cpu</span> host <span class="se">\</span>
  <span class="nt">--disk</span> <span class="nv">size</span><span class="o">=</span>120 <span class="nt">--memory</span> 4096 <span class="se">\</span>
  <span class="nt">--network</span> br0-network <span class="se">\</span>
  <span class="nt">--graphics</span> none <span class="se">\</span>
  <span class="nt">--console</span> pty,target_type<span class="o">=</span>serial <span class="se">\</span>
  <span class="nt">--extra-args</span> <span class="s2">"console=ttyS0"</span>
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--name</code>: The name of the virtual machine, also known as the domain name.</p><p><code class="language-plaintext highlighter-rouge">--location</code>: System image location. It can be a network location, such as <code class="language-plaintext highlighter-rouge">https://cn.archive.ubuntu.com/ubuntu/dists/jammy/main/installer-amd64/</code>, or a local path; the <code class="language-plaintext highlighter-rouge">--cdrom</code> parameter can also specify the system image, but only supports local paths. In environments without a graphical interface, it is necessary to use custom kernel parameters to enable a serial console with <code class="language-plaintext highlighter-rouge">--extra-args</code> for system installation, as <code class="language-plaintext highlighter-rouge">--cdrom</code> does not support custom kernel parameters, hence the need for <code class="language-plaintext highlighter-rouge">--location</code>.</p><p>Since <a href="https://askubuntu.com/questions/789358/virt-install-using-location-with-iso-image-no-longer-working" target="_blank" rel="nofollow noopener noreferrer"><code class="language-plaintext highlighter-rouge">--location</code> cannot automatically detect the location of the kernel in the image</a>, the <code class="language-plaintext highlighter-rouge">kernel=casper/vmlinuz,initrd=casper/initrd</code> must be manually specified.</p><p><code class="language-plaintext highlighter-rouge">--os-variant</code>: Operating system type. Use the <code class="language-plaintext highlighter-rouge">osinfo-query os</code> command to view supported versions.</p><p><code class="language-plaintext highlighter-rouge">--vcpus</code>: Initial number of CPU threads. Each virtual thread in a KVM virtual machine is bound to a real thread, so setting the virtual CPU quantity to exceed the real thread count is meaningless. 
However, a real thread can be simultaneously allocated to multiple virtual machines, with the scheduler handling the assigned tasks of multiple virtual machines, which is why CPU overselling is possible.</p><p><code class="language-plaintext highlighter-rouge">--cpu</code>: CPU configuration. The CPU model and features can be configured. When the model is set to <code class="language-plaintext highlighter-rouge">host</code>, the virtual machine will have all the features of the host CPU, but it may also prevent live migration.</p><p><code class="language-plaintext highlighter-rouge">--disk opt1=val1,opt2=val2,...</code>: Virtual machine storage device. Size can be set using the <code class="language-plaintext highlighter-rouge">size</code> option or the path can be defined using the <code class="language-plaintext highlighter-rouge">path</code> option.</p><p><code class="language-plaintext highlighter-rouge">--memory</code>: Memory size.</p><p><code class="language-plaintext highlighter-rouge">--network</code>: Select the network. 
Pass the name of the previously defined libvirt network, br0-bridge.</p><p><code class="language-plaintext highlighter-rouge">--graphics</code>, <code class="language-plaintext highlighter-rouge">--console</code>, <code class="language-plaintext highlighter-rouge">--extra-args</code>: Configure a serial console for the virtual machine to operate directly from the host without a graphical interface.</p><p>By running the above command, you can enter the installation interface, select basic mode, and install the system via a text console.</p><blockquote><p>See <a href="https://linux.die.net/man/1/virt-install" target="_blank" rel="nofollow noopener noreferrer">virt-install(1)</a>.</p></blockquote><h3 id="installing-vnc">Installing VNC</h3><p>Originally, the generous soul could sit in front of their server, pour a cup of water, leisurely turn on the screen, and configure the environment or run code with lukewarm water in hand.</p><p>But since the timid one did not install a desktop environment, the generous soul could not open the virtual machine’s graphical interface. After some consideration, the timid one realized that installing remote desktop access would be more convenient than physically sitting in front of the server. Thus, they gritted their teeth and began configuring it for the generous soul.</p><p>First, VNC requires a desktop environment—don’t forget that Ubuntu Server was just installed! The timid one chose the GNOME desktop environment tailored to the generous soul’s preferences.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>gnome-session gdm3 <span class="c"># Install the gnome desktop environment and the gdm3 display manager</span>
<span class="nb">sudo </span>apt <span class="nb">install </span>ubuntu-desktop <span class="c"># Install various packages necessary for the desktop environment</span>
<span class="nb">sudo </span>systemctl set-default multi-user.target <span class="c"># Do not start the graphical environment by default</span>
</code></pre></div></div><p>For the VNC server, TigerVNC was chosen.</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>tigervnc-standalone-server dbus-x11
</code></pre></div></div><p>Configure <code class="language-plaintext highlighter-rouge">~/.vnc/xstartup</code>:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh</span>

<span class="o">[</span> <span class="nt">-x</span> /etc/vnc/xstartup <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">exec</span> /etc/vnc/xstartup
<span class="o">[</span> <span class="nt">-r</span> <span class="nv">$HOME</span>/.Xresources <span class="o">]</span> <span class="o">&amp;&amp;</span> xrdb <span class="nv">$HOME</span>/.Xresources
vncconfig <span class="nt">-iconic</span> &amp;
<span class="nb">export </span><span class="nv">DESKTOP_SESSION</span><span class="o">=</span>/usr/share/xsessions/ubuntu.desktop
<span class="nb">export </span><span class="nv">XDG_CURRENT_DESKTOP</span><span class="o">=</span>ubuntu:GNOME
<span class="nb">export </span><span class="nv">GNOME_SHELL_SESSION_MODE</span><span class="o">=</span>ubuntu
<span class="nb">export </span><span class="nv">XDG_DATA_DIRS</span><span class="o">=</span>/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch <span class="nt">--exit-with-session</span> /usr/bin/gnome-session <span class="nt">--systemd</span> <span class="nt">--session</span><span class="o">=</span>ubuntu
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">--systemd</code>: Only required if the gnome-shell version is below 3.40.</p><p>Configure <code class="language-plaintext highlighter-rouge">~/.vnc/config</code>:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">session</span><span class="o">=</span>ubuntu
<span class="nv">geometry</span><span class="o">=</span>1920x1080
localhost
alwaysshared
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">alwaysshared</code>: All clients will connect to the same session.</p><p>Configure <code class="language-plaintext highlighter-rouge">/etc/systemd/system/vncserver@.service</code>:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>Unit]
<span class="nv">Description</span><span class="o">=</span>Start TigerVNC server at startup
<span class="nv">After</span><span class="o">=</span>syslog.target network.target

<span class="o">[</span>Service]
<span class="nv">Type</span><span class="o">=</span>forking
<span class="nv">User</span><span class="o">=</span>&lt;youruser&gt;
<span class="nv">Group</span><span class="o">=</span>&lt;youruser&gt;
<span class="nv">WorkingDirectory</span><span class="o">=</span>/home/&lt;youruser&gt;
<span class="nv">PIDFile</span><span class="o">=</span>/home/&lt;youruser&gt;/.vnc/%H:%i.pid
<span class="nv">ExecStartPre</span><span class="o">=</span>-/bin/sh <span class="nt">-c</span> <span class="s2">"/usr/bin/vncserver -kill :%i &gt; /dev/null 2&gt;&amp;1"</span>
<span class="nv">ExecStart</span><span class="o">=</span>/usr/bin/vncserver <span class="nt">-depth</span> 24 <span class="nt">-geometry</span> 1920x1080 <span class="nt">-localhost</span> :%i
<span class="nv">ExecStop</span><span class="o">=</span>/usr/bin/vncserver <span class="nt">-kill</span> :%i
<span class="nv">Restart</span><span class="o">=</span>on-success
<span class="nv">RestartSec</span><span class="o">=</span>10

<span class="o">[</span>Install]
<span class="nv">WantedBy</span><span class="o">=</span>multi-user.target
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">-localhost</code>: Allows VNC access only from the local machine. For remote access, SSH secure tunneling is required for forwarding.</p><p><code class="language-plaintext highlighter-rouge">Restart</code>, <code class="language-plaintext highlighter-rouge">RestartSec</code>: When a client logs out, the server will automatically exit. If the server should continue running, add the restart parameters specified above.</p><p>Lastly, start the VNC server via systemd:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="nt">--now</span> vncserver@1
</code></pre></div></div><p><code class="language-plaintext highlighter-rouge">@1</code>: Session number. Setting the session number to 1 will use port 5901; number 2 will use port 5902, and so on.</p><p>Finally, establish an SSH secure tunnel and connect to VNC:</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh &lt;youruser&gt;@&lt;serverip&gt; <span class="nt">-L</span> 9901:localhost:5901
</code></pre></div></div><p>Connect to <code class="language-plaintext highlighter-rouge">localhost:5901</code> with a VNC client, set the client’s quality to “high” if the screen is somewhat blurry, and everything will be splendid.</p><h2 id="just-because-i-took-a-second-glance-at-you-in-the-crowd">Just because I took a second glance at you in the crowd</h2><p>Here, the timid one has spent an entire day just to set up a KVM virtual machine for a kind-hearted person (and it’s the kind with remote access). If you were to ask him why he’s doing all this, he might reply, “Just because I took a second glance at you in the crowd, I knew you were a kind-hearted person. For kind-hearted people, I can’t be too incompetent.”</p><p>With courage, the timid one wanted to tell the kind-hearted person that he prepared a very user-friendly virtual machine for him. But seeing the kind-hearted person busy with other tasks, the timid one hesitated, walked up to him, struggled to speak, then turned and walked away as if nothing had happened.</p>]]></content><author><name>DotIN13</name></author><category term="en-us"/><category term="Linux"/><category term="Ubuntu"/><category term="KVM"/><category term="RDP"/><summary type="html"><![CDATA[Just because it’s not enough The office server is just enough. But about 45.67% of the reason is that there is a barbarian who is trying to snatch a machine, wanting to occupy another one; and about 62.72% of the reason is that there is a coward who pretends to be deaf and dumb, unwilling to sacrifice his own machine. So, in this world line where the probability is 108.39%, there is an individual who is willing to contribute, willing to be the benevolent one who takes a loss. But it turns out to be still too little. Making something out of nothing Although the coward is a bit stingy, hearing that the good-hearted person has no machine to use, it feels uncomfortable, but it’s not easy to conjure a machine out of thin air. 
“Since you can’t make something out of nothing, then you can only try ‘building a pagoda in a snail shell’ or ‘sailing a boat in the prime minister’s belly.’” The coward decides to use KVM to split one machine into two, one for themselves and one for the kind-hearted individual. Even if the barbarian acts up again, they won’t be able to find a reason to snatch it again. Running Ubuntu Server with KVM Installing libvirt According to the timid one’s understanding, KVM is just a way of virtualizing a virtual machine, and the actual simulation of hardware for virtual machines to use is still done by QEMU, with Libvirt commands being the true managers of virtual machines. Therefore, it is necessary to first install QEMU and libvirt. sudo apt install --no-install-recommends qemu-system libvirt-clients libvirt-daemon-system qemu-utils --no-install-recommends: Do not install recommended packages. If graphical management tools are not needed, this option can be selected. As the timid one is cautious, they want their non-root user to be able to manage virtual machines as well. Therefore, run the following command: sudo adduser &lt;youruser&gt; libvirt But that’s not enough. If running virtual machine management command virsh, it manages virtual machines under the current username. In order to manage virtual machines under the root name, some adjustments need to be made. virsh --connect qemu:///system list --all As a result, every time a command is used, you have to enter --connect qemu:///system. How inconvenient is that? Luckily, environment variables can be imported to allow virsh to manage the system’s virtual machines only. # Place the following environment variable declaration in ~/.bashrc or ~/.zshrc export LIBVIRT_DEFAULT_URI='qemu:///system' Reference: Debian Wiki/KVM. Configuring Virtual Networks The office lacks more than just space; there’s also a shortage of network cables. 
The timid one’s machine has only one network cable plugged in, so that single cable must provide internet access for both the host and the guest. Moreover, the guest needs to get its own LAN IP address, or else the kind-hearted person would have to deal with port mapping, which is quite a hassle!

Network configurations in libvirt roughly fall into three types:

Bridged network. The host and all guests share one network interface and sit on the same network segment; each has its own LAN IP and can be reached directly from outside.

NAT network. The default mode in libvirt. The host and all guests share one network interface but are on different network segments, with the host acting as the DHCP server for the guests.

Routed network. The host and all guests share one network interface, but the guests live on a subnet of their own that the external network knows nothing about; other devices need static routes configured on the router to reach the virtual machines directly.

As expected, the timid one chose the simplest option, the bridged network.

sudo ip link add br0 type bridge # Add a bridge named br0
sudo ip link set &lt;device&gt; up # Bring up a network device, such as the enp0s2 interface
sudo ip link set &lt;device&gt; master br0 # Attach the device to the bridge
sudo ip address add dev br0 192.168.1.142/24 # Set the host bridge's IP to 192.168.1.142

The bridge is configured now, but it will vanish after a reboot. To make it persistent, the bridge-utils package is needed.

sudo apt install bridge-utils

Next, configure the network interface.
For example, if the original interface in use was enp0s2, replace the original iface enp0s2 inet dhcp line with the following content:

# Set network interface enp0s2 to manual configuration to avoid conflicts with NetworkManager
iface enp0s2 inet manual

# Configure bridge br0
auto br0
iface br0 inet static
    bridge_ports enp0s2
    address 192.168.1.142
    broadcast 192.168.1.255
    netmask 255.255.255.0
    gateway 192.168.1.1

Use systemd to restart the networking service, and the network configuration will take effect.

sudo systemctl restart networking

Then, for libvirt’s virtual machines to use the br0 bridge, the bridge needs to be declared to libvirt. First, create a br0-bridge.xml file with the following content:

&lt;network&gt;
  &lt;name&gt;br0-bridge&lt;/name&gt;
  &lt;forward mode="bridge" /&gt;
  &lt;bridge name="br0" /&gt;
&lt;/network&gt;

Then run the virsh command to import the declared configuration.

virsh net-define br0-bridge.xml

Use virsh net-list --all to view all existing networks.

Reference: Bridged Networking with libvirt.

Installing Ubuntu Server

Although the timid one is fearful, he is also known for being meticulous. The kind-hearted person has always used the plain, popular Ubuntu Desktop with the GNOME desktop environment; shouldn’t the virtual machine be configured to his liking? So this time, for the kind-hearted person to get an authentic experience, the timid one reluctantly dabbled with KVM without a graphical interface. However, fate has it that the timid one’s own system runs a non-graphical Debian, while the Ubuntu Desktop installer requires a graphical environment. As a compromise, Ubuntu Server was installed first, with the graphical interface to be solved separately.

First, download the system image, then install the libosinfo-bin package to help the virt-install command recognize the system version:

sudo apt install libosinfo-bin

Run the osinfo-query os command to view the system versions supported by virt-install.
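Stepping back to the bridge declaration for a moment: the br0-bridge.xml file above can be generated and sanity-checked from the shell before handing it to virsh net-define. A sketch assuming the same names as in the article (br0, br0-bridge) and a temp path:

```shell
# Sketch: write the libvirt network definition from the article to a file
# (/tmp used here purely for illustration) and sanity-check it before
# running `virsh net-define`.
cat > /tmp/br0-bridge.xml <<'EOF'
<network>
  <name>br0-bridge</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>
EOF

# Cheap sanity check: the forward mode and bridge name are what libvirt
# will key off when attaching guest interfaces.
grep -q '<forward mode="bridge" />' /tmp/br0-bridge.xml && echo "bridge declaration looks sane"
```

After `virsh net-define`, the network shows up in `virsh net-list --all` under the name br0-bridge.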
Since Ubuntu 22.04 is not included in the list shipped with the libosinfo-bin package, the osinfo database must be downloaded and updated manually from the libosinfo hosting site.

wget -O "/tmp/osinfo-db.tar.xz" "https://releases.pagure.org/libosinfo/osinfo-db-20221130.tar.xz"
osinfo-db-import --user "/tmp/osinfo-db.tar.xz"

--user: Database import location. --user stores the database in ~/.config/osinfo, --local stores it in /etc/osinfo, and --system stores it in /usr/share/osinfo.

Compose the virtual machine installation command:

virt-install --virt-type kvm --name &lt;domain-name&gt; \
  --location &lt;path/to/ubuntu-22.04.iso&gt;,kernel=casper/vmlinuz,initrd=casper/initrd \
  --os-variant ubuntu22.04 \
  --vcpus 10,maxvcpus=20 --cpu host \
  --disk size=120 --memory 4096 \
  --network network=br0-bridge \
  --graphics none \
  --console pty,target_type=serial \
  --extra-args "console=ttyS0"

--name: The name of the virtual machine, also known as the domain name.

--location: System image location. It can be a network location, such as https://cn.archive.ubuntu.com/ubuntu/dists/jammy/main/installer-amd64/, or a local path. The --cdrom parameter can also specify the system image, but only supports local paths. In an environment without a graphical interface, custom kernel parameters must be passed via --extra-args to enable a serial console for the installation; since --cdrom does not support custom kernel parameters, --location is required. And because --location cannot automatically detect the kernel’s location inside the image, kernel=casper/vmlinuz,initrd=casper/initrd must be specified manually.

--os-variant: Operating system type. Use the osinfo-query os command to view supported versions.

--vcpus: Initial number of CPU threads. Each virtual thread of a KVM virtual machine is bound to a real thread, so setting the virtual CPU count above the real thread count is pointless.
However, a real thread can be allocated to multiple virtual machines at once, with the scheduler juggling the tasks of the virtual machines assigned to it, which is why CPU overselling is possible.

--cpu: CPU configuration. The CPU model and features can be configured. When the model is set to host, the virtual machine gets all the features of the host CPU, but this may prevent live migration.

--disk opt1=val1,opt2=val2,...: Virtual machine storage device. The capacity can be set with the size option, or an existing image can be given with the path option.

--memory: Memory size in MiB.

--network: Select the network. Choose the previously created bridge network br0-bridge.

--graphics, --console, --extra-args: Configure a serial console so the virtual machine can be operated directly from the host without a graphical interface.

Running the above command enters the installer; select basic mode and install the system through the text console. See virt-install(1).

Installing VNC

Originally, the kind-hearted person could sit in front of his server, pour a cup of water, leisurely turn on the screen, and configure the environment or run code with lukewarm water in hand. But since the timid one did not install a desktop environment, the kind-hearted person could not open the virtual machine’s graphical interface. After some consideration, the timid one realized that remote desktop access would be even more convenient than physically sitting in front of the server. So he gritted his teeth and began configuring it for the kind-hearted person.

First, VNC requires a desktop environment. Don’t forget that only Ubuntu Server was installed! The timid one chose the GNOME desktop environment, tailored to the kind-hearted person’s preferences.
sudo apt install gnome-session gdm3 # Install the GNOME desktop environment and the gdm3 display manager
sudo apt install ubuntu-desktop # Install the various packages needed by the desktop environment
sudo systemctl set-default multi-user.target # Do not start the graphical environment by default

For the VNC server, TigerVNC was chosen.

sudo apt install tigervnc-standalone-server dbus-x11

Configure ~/.vnc/xstartup:

#!/bin/sh
[ -x /etc/vnc/xstartup ] &amp;&amp; exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] &amp;&amp; xrdb $HOME/.Xresources
vncconfig -iconic &amp;
export DESKTOP_SESSION=/usr/share/xsessions/ubuntu.desktop
export XDG_CURRENT_DESKTOP=ubuntu:GNOME
export GNOME_SHELL_SESSION_MODE=ubuntu
export XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
dbus-launch --exit-with-session /usr/bin/gnome-session --systemd --session=ubuntu

--systemd: Only required if the gnome-shell version is below 3.40.

Configure ~/.vnc/config:

session=ubuntu
geometry=1920x1080
localhost
alwaysshared

alwaysshared: All clients connect to the same session.

Configure /etc/systemd/system/vncserver@.service:

[Unit]
Description=Start TigerVNC server at startup
After=syslog.target network.target

[Service]
Type=forking
User=&lt;youruser&gt;
Group=&lt;youruser&gt;
WorkingDirectory=/home/&lt;youruser&gt;
PIDFile=/home/&lt;youruser&gt;/.vnc/%H:%i.pid
ExecStartPre=-/bin/sh -c "/usr/bin/vncserver -kill :%i &gt; /dev/null 2&gt;&amp;1"
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1920x1080 -localhost :%i
ExecStop=/usr/bin/vncserver -kill :%i
Restart=on-success
RestartSec=10

[Install]
WantedBy=multi-user.target

-localhost: Allows VNC access only from the local machine; for remote access, forwarding through an SSH tunnel is required.

Restart, RestartSec: When a client logs out, the server exits. If the server should keep running, add the restart parameters above.
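One detail worth making explicit before starting the service: TigerVNC maps display :N to TCP port 5900+N, which is exactly what the systemd template’s instance number and the SSH tunnel below rely on. A tiny sketch of the convention:

```shell
# Sketch: TigerVNC's display-to-port convention. The systemd instance
# number (vncserver@N) picks display :N, which listens on TCP 5900+N.
vnc_port() { echo $((5900 + $1)); }

vnc_port 1   # display :1 -> 5901
vnc_port 2   # display :2 -> 5902
```

So `vncserver@1` serves display :1 on port 5901, the port the tunnel forwards.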
Lastly, start the VNC server via systemd:

sudo systemctl enable --now vncserver@1

@1: Session number. Session number 1 uses port 5901, number 2 uses port 5902, and so on.

Finally, establish an SSH tunnel and connect to VNC:

ssh &lt;youruser&gt;@&lt;serverip&gt; -L 9901:localhost:5901

Connect to localhost:9901 (the local end of the tunnel) with a VNC client, set the client’s quality to “high” if the screen is somewhat blurry, and everything will be splendid.

Just because I took a second glance at you in the crowd

Here, the timid one has spent an entire day just to set up a KVM virtual machine for a kind-hearted person (and it’s the kind with remote access). If you were to ask him why he’s doing all this, he might reply, “Just because I took a second glance at you in the crowd, I knew you were a kind-hearted person. For kind-hearted people, I can’t be too incompetent.” With courage, the timid one wanted to tell the kind-hearted person that he had prepared a very user-friendly virtual machine for him.
But seeing the kind-hearted person busy with other tasks, the timid one hesitated, walked up to him, struggled to speak, then turned and walked away as if nothing had happened.]]></summary></entry><entry><title type="html">在M1 Macbook上不使用Rosetta浪漫地游玩Minecraft+Forge</title><link href="https://www.wannaexpresso.com/2022/08/07/m1-hmcl-hack/" rel="alternate" type="text/html" title="在M1 Macbook上不使用Rosetta浪漫地游玩Minecraft+Forge"/><published>2022-08-07T00:00:00+08:00</published><updated>2022-08-07T00:00:00+08:00</updated><id>https://www.wannaexpresso.com/2022/08/07/m1-hmcl-hack</id><content type="html" xml:base="https://www.wannaexpresso.com/2022/08/07/m1-hmcl-hack/"><![CDATA[<p>距离上一次<a href="/2021/02/20/m1-macbook-minecraft/">在M1 MacBook上优雅地游玩Minecraft</a>已经过去了整整一年的时间。昔日的姐妹们多半已经入职，Minecraft服务器就算开着也无人问津。</p><p>我一直以为这个周末会和夏天所有的周末一样，在炎热与焦躁不安中度过，却没想到又收到了姐妹的微信，问还有没有MC的整合包。</p><p>翻箱倒柜找到去年的目录，文件都还在，只是已经忘记了如何打开——在M1上原生运行Minecraft一直都是个头疼的事儿。</p><p>容我再研究研究。</p><h2 id="世事如常">世事如常</h2><p>“世事无常”，Hello Minecraft! 
Launcher （HMCL）已经支持调用自定义库文件，LWJGL 3.3.0往后已经出厂自带macOS-ARM64原生组件，<a href="https://github.com/MinecraftMachina/ManyMC" target="_blank" rel="nofollow noopener noreferrer">ManyMC启动器</a>实现库文件替换自动化。</p><p>一眼望去，似乎日新月异，欣欣向荣。只可惜金玉其外，败絮其中，世事如常尔尔。Minecraft至今未有动过为1.19以前的Java版本支持macOS-ARM的念头。想要原生运行Minecraft还得“吃自助”——自己解决LWJGL原生库。</p><h2 id="自力更生">自力更生</h2><p>又有一位哲人曾经说过，“靠自己，靠别人是没有幸福的”。多亏<a href="https://github.com/yaoxi-std" target="_blank" rel="nofollow noopener noreferrer">yaoxi-std</a>，HMCL要想<a href="https://github.com/huanghongxun/HMCL/pull/887" target="_blank" rel="nofollow noopener noreferrer">支持调用动态库文件</a>已经有了现成的解决方案，但调用JAR库文件似乎还没有头绪，每次手动复制也不是办法，万一有哪次忘记关闭“检查游戏文件”，就是前功尽弃。</p><p><a href="https://github.com/yusefnapora/m1-multimc-hack" target="_blank" rel="nofollow noopener noreferrer">yusefnapora的MultiMC自动脚本</a>给了我启发，HMCL的包装命令（Wrapper Command）功能同样可以实现每次启动时根据版本修改JVM参数，加载相应的原生库文件。</p><p>来活了。</p><h2 id="m1-hmcl-hack">M1 HMCL Hack</h2><p>为什么M1 MultiMC Hack历史已经如此悠久，HMCL就不能有M1 HMCL Hack？是牌面不足吗？今天我就要打破这一魔咒，<a href="https://github.com/DotIN13/m1-hmcl-hack" target="_blank" rel="nofollow noopener noreferrer">M1 HMCL Hack</a>今日Debut。</p><blockquote><p>好像押韵了。</p></blockquote><p>包装命令可以自动识别Minecraft版本，将原有JVM启动参数中的库文件路径替换为正确的原生库文件路径。使用方便，如假包换。</p><p>友情提示：虽然咱们名字叫M1 HMCL Hack，从理论上来说M2应该还是兼容的。</p><h3 id="安装使用">安装使用</h3><h4 id="第一步下载java与hmcl">第一步：下载Java与HMCL</h4><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/zulu18-320-ed0e5cd2b.avif 320w, /assets/public/images/in-post/post-macbook/zulu18-640-6d51cde4f.avif 640w, /assets/public/images/in-post/post-macbook/zulu18-960-14254976c.avif 960w, /assets/public/images/in-post/post-macbook/zulu18-1600-5689c5995.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px"
srcset="/assets/public/images/in-post/post-macbook/zulu18-320-a09c63d63.webp 320w, /assets/public/images/in-post/post-macbook/zulu18-640-f985a821c.webp 640w, /assets/public/images/in-post/post-macbook/zulu18-960-1600d7264.webp 960w, /assets/public/images/in-post/post-macbook/zulu18-1600-ce9a1a852.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-macbook/zulu18-1600-ce9a1a852.webp" width="3104" height="1860"></picture><em>Zulu Java OpenJDK Download</em></div><p>首先至Azul官方网站<a href="https://www.azul.com/downloads/?os=macos&amp;architecture=arm-64-bit&amp;package=jdk-fx" target="_blank" rel="nofollow noopener noreferrer">下载Zulu JDK</a>，下载时可以选择<code class="language-plaintext highlighter-rouge">.dmg</code>安装包直接安装。架构需要选择<code class="language-plaintext highlighter-rouge">ARM 64-bit</code>，另外，由于HMCL需要调用OpenJFX，包类型需要选择<code class="language-plaintext highlighter-rouge">JDK-FX</code>。Java版本则需要依据Minecraft版本来选择。经过我的实际测试，Minecraft与Java的兼容性大致如下。</p><div class="responsive-table"><table><thead><tr><th>Minecraft</th><th>Java</th><th>LWJGL</th></tr></thead><tbody><tr><td>1.19</td><td>&gt;= 17</td><td>3.3.1</td></tr><tr><td>1.18</td><td>&gt;= 17</td><td>3.3.1</td></tr><tr><td>1.17</td><td>&gt;= 17</td><td>3.2.3</td></tr><tr><td>1.16</td><td>&gt;= 8</td><td>3.3.1</td></tr><tr><td>1.12</td><td>&gt;= 8, &lt;= 11</td><td>2.9.4</td></tr><tr><td>1.10</td><td>8</td><td>2.9.4</td></tr><tr><td>1.7</td><td>8</td><td>2.9.4</td></tr></tbody></table></div><p>JDK的各个版本间可以共存，我全都要党胜利依旧。</p><p>接下来，<a href="https://github.com/huanghongxun/HMCL" target="_blank" rel="nofollow noopener noreferrer">下载HMCL</a>，落笔时，最新版为3.5.3.211。</p><h4 id="第二步克隆仓库">第二步：克隆仓库</h4><p>克隆M1 HMCL Hack的仓库到本地。</p><div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/DotIN13/m1-hmcl-hack.git
</code></pre></div></div><h4 id="第三步设置包装命令">第三步：设置包装命令</h4><p>打开HMCL，下载或者导入一个Minecraft实例。</p><p>然后进入实例设置，勾选“启用游戏特定设置”（Enable per-instance settings）。</p><p>再根据上文中的表格选择合适的“Java路径”（Java Path），注意选择第一步中安装的ARM架构Java版本。</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-java-version-320-286bf1c91.avif 320w, /assets/public/images/in-post/post-macbook/hmcl-java-version-640-81d750ae1.avif 640w, /assets/public/images/in-post/post-macbook/hmcl-java-version-960-63d93bbf6.avif 960w, /assets/public/images/in-post/post-macbook/hmcl-java-version-1600-7fa9e66e0.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-java-version-320-d073afe0b.webp 320w, /assets/public/images/in-post/post-macbook/hmcl-java-version-640-f63df02d6.webp 640w, /assets/public/images/in-post/post-macbook/hmcl-java-version-960-0cdafe616.webp 960w, /assets/public/images/in-post/post-macbook/hmcl-java-version-1600-c16ee4fc6.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-macbook/hmcl-java-version-1600-c16ee4fc6.webp" width="2174" height="1314" loading="lazy"></picture><em>Java Mr. 
Right</em></div><p>向下滚动页面，找到高级设置中的“包装命令”（Wrapper Command），填入<code class="language-plaintext highlighter-rouge">/usr/bin/ruby /path/to/index.rb</code>，此处<code class="language-plaintext highlighter-rouge">/path/to/index.rb</code>应为<code class="language-plaintext highlighter-rouge">index.rb</code>文件在本地的路径。运行游戏时，HMCL会将当前目录切换到<code class="language-plaintext highlighter-rouge">.minecraft</code>目录下，因此，如果填写相对路径，应当以<code class="language-plaintext highlighter-rouge">.minecraft</code>作为起点。</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-wrapper-320-fea162ae9.avif 320w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-640-5d1bf0417.avif 640w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-960-a6f8ef561.avif 960w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-1600-06d6184ed.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-wrapper-320-4d6d58131.webp 320w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-640-e5607a272.webp 640w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-960-337b43568.webp 960w, /assets/public/images/in-post/post-macbook/hmcl-wrapper-1600-0d5a68646.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-macbook/hmcl-wrapper-1600-0d5a68646.webp" width="2174" height="1314" loading="lazy"></picture><em>Setting Up the Wrapper Command</em></div><p>最后，由于HMCL默认禁止在ARM架构的macOS中使用ARM架构Java启动Minecraft，因此需要在页面最底部勾选“不检查JVM与游戏的兼容性”。</p><div class="post-img__container post-img"><picture><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-320-0aa027a80.avif
320w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-640-203714ebd.avif 640w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-960-d6a880935.avif 960w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-1600-a95ae6d45.avif 1600w" type="image/avif"></source><source sizes="(max-width: 600px) 100vw, (max-width: 1024px) 60vw, (max-width: 1600px) 960px, 1600px" srcset="/assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-320-c884b3fc8.webp 320w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-640-292d984ab.webp 640w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-960-62561f63d.webp 960w, /assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-1600-d3e44c0cd.webp 1600w" type="image/webp"></source><img class="zoomable" src="/assets/public/images/in-post/post-macbook/hmcl-no-compatibility-check-1600-d3e44c0cd.webp" width="2174" height="1314" loading="lazy"></picture><em>No JVM Compatibility Checks</em></div><p>至此，点击开始游戏，Minecraft就可以以Nosetta（No-rosetta）状态运行了。对于任何新安装的Minecraft版本，只需要重新操作第三步，即可继续纵享丝滑。</p><h2 id="爱好者开发制">爱好者开发制</h2><p>你可能会问我为什么要做HMCL Hack。我可能会回答你，我想让更多和我一样拥有Apple Silicon，喜欢HMCL的玩家玩上更加流畅的Minecraft。但这一切都有必要吗？或者说，在实然上确实有必要，那么在应然的角度上来看呢？</p><p>Microsoft应该带头解决Minecraft的兼容问题。GLFW应该在小版本更新时考虑到向后兼容性。LWJGL应该对旧版本进行重发布来兼容ARM架构而不是通过大版本更新来解决，以免导致使用旧版库的所有应用都无法在macOS 
ARM-64bit上运行。</p><p>对于Minecraft这个特别的游戏来说，每个游戏版本无论新旧都有庞大的玩家群体，向后兼容性至关重要。但即便如此，整个上下游依旧各自为阵，到今天为止也只解决了最新版本的兼容性问题，对旧版本选择了全然无视。</p><p>实然与应然有如隔海，理想的家园不知所在。开源的本质是爱好者的分享，这也就好比计算机软件的拥有者与掌握者与使用者签订了契约，将使用和修正的权力赋予用户。但与一般意义上的社会契约不同，用户缺少了推翻“统治者”的权力。虽然开源隐喻着开放，但代码的维护者依旧保有着全部的权力，即便用户可以修正他们所使用的计算机软件，他们也不能对其他用户所使用的软件造成实质的影响，缺乏公信力、缺乏共享渠道，导致了这些用户进行的修正最终只能停留在自娱自乐。这就好比用户只是一个制造假币的小丑，而币制的制定权依旧掌握在某些“爱好者”的手里。</p><p>开源不是一个完全去中心化的过程，它只是将中心进行了转移，从邪恶的大厂手中，转移到了另一部分人手中。如果这些开发者与用户的想法背道而驰，那么失去替代选项的用户也只得言听计从。</p><p>你一言我一语自然是不能成事，但开源是不是也只是“少数人的暴政”披上了羊皮？</p>]]></content><author><name>DotIN13</name></author><category term="Apple Silicon"/><category term="MacBook"/><category term="Minecraft"/><category term="Forge"/><summary type="html"><![CDATA[距离上一次在M1 MacBook上优雅地游玩Minecraft已经过去了整整一年的时间。昔日的姐妹们多半已经入职，Minecraft服务器就算开着也无人问津。]]></summary></entry></feed>