First, some background. Here is the Plan 9 story.
Example 1: Replacing the CPU
Imagine we have a laptop we use every day but whose performance is weak, and a powerful server somewhere remote. We can of course use the remote machine's fast CPU to run computation-heavy programs; that is easy, since almost every operating system lets you log in to a remote machine. The trouble is that if the program running remotely needs to read and write local files, or use devices attached to the laptop such as the printer, speakers, or microphone, we have no good option besides shuttling files back and forth between the two machines. In particular, if we want to borrow the other machine's fast CPU to decode the audio and video of a movie sitting in the laptop's optical drive, we cannot expect the remote machine both to read the local drive and to deliver the sound to the local speakers.
Plan 9 has a simple cpu command that lets a user run programs on another machine's CPU in a natural way, while still having access to all local files and devices. That is, we can use the remote machine's fast CPU for image processing, media decoding, and similar work, and play the sound directly on the local speakers. The cpu command feels as though the machine's CPU has been swapped out while everything else stays exactly the same. This seemingly "magical" feature is not at all complicated to implement in Plan 9: the cpu command first connects to the server, then mounts all local resources and file systems, including the window manager, the optical drive, the speakers, and other devices (remember, they are all files), onto the server, where they become the server's resources. Programs running on the server can then "naturally" interact through the local keyboard, mouse, and display, and can access local devices such as the display and speakers.
The cpu command thus lives up to its name: it swaps out the local machine's CPU (and, in fact, its memory) while keeping every other device. The command has a strongly distributed-operating-system flavor, and because the operating systems we normally use are not distributed, cpu still has no exact equivalent on today's mainstream systems.
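To make the "everything is a file" point concrete, here is a small Go sketch of what a remote program does to play sound on the local laptop once the client's namespace has been mounted. On Plan 9 the client's namespace appears under /mnt/term; the exact paths and the raw-audio input file below are assumptions for illustration, not code taken from Plan 9 or from u-root.

// Sketch only: after the cpu command mounts the client's namespace on the
// server, the client's audio device is just a file the remote program can
// write to. Paths are illustrative (Plan 9 convention: /mnt/term/...).
package main

import (
	"io"
	"os"
)

func main() {
	// Audio decoded by the fast remote CPU, e.g. raw PCM (assumed input file).
	pcm, err := os.Open("movie.pcm")
	if err != nil {
		panic(err)
	}
	defer pcm.Close()

	// The client's speaker, visible as an ordinary file inside the mounted
	// client namespace.
	speaker, err := os.OpenFile("/mnt/term/dev/audio", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer speaker.Close()

	// Copying bytes to that file plays the sound on the local laptop.
	if _, err := io.Copy(speaker, pcm); err != nil {
		panic(err)
	}
}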
This impressive cpu command has now been ported to Linux, implemented entirely in Go.
Here is an introduction:
The u-root cpu command
Do you want to have all the tools on your linuxboot system that you have on your desktop, but you can’t get them to fit in your tiny flash part? Do you want all your desktop files visible on your linuxboot system, but just remembered there’s no disk on your linuxboot system? Are you tired of using scp or wget to move files around? Do you want to run emacs or vim on the linuxboot machine, but know they can’t ever fit? What about zsh? How about being able to run commands on your linuxboot machine and have the output appear on your home file system? You say you’d like to make this all work without having to fill out web forms in triplicate to get your organization to Do Magic to your desktop?
Your search is over: cpu is here to answer all your usability needs.
The problem: running your program on some other system
People often need to run a command on a remote system. That is easy when the remote system is the same as the system you are on, e.g., both systems are Ubuntu 16.04; and all the libraries, packages, and files are roughly the same. But what if the systems are different, say, Ubuntu 16.04 and Ubuntu 18.10? What if one is Centos, the other Debian? What if a required package is missing on the remote system, even though in all other ways they are the same?
While these systems are both Linux, and hence can provide Application Binary Interface (ABI) stability at the system call boundary, above that boundary stability vanishes. Even small variations between Ubuntu versions matter: symbol versions in C libraries differ, files are moved, and so on.
What is a user to do if they want to build a binary on one system, and run it on another system?
The simplest approach is to copy the source to that other system and compile it. That works sometimes. But there are limits: copying the source might not be allowed; the code might not even compile on the remote system; some support code might not be available, as for a library; and for embedded systems, there might not be a compiler on the remote system. Copy and compile is not always an option. In fact it rarely works nowadays, when even different Linux distributions are incompatible.
The next option is to use static linking. Static linking is the oldest form of binary on Linux systems. While it has the downside of creating larger binaries, in an age of efficient compilers that remove dead code, 100 gigabit networks, and giant disks and memory, that penalty is not the problem it once was. The growth in size of static binaries is nothing like the growth in efficiency and scale of our resources. Nevertheless, static linking is frowned upon nowadays and many libraries are only made available for dynamic linking.
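A side note that matters for the Go implementation mentioned earlier: a pure-Go program built with CGO_ENABLED=0 is statically linked and leans only on the kernel's system-call ABI described above, so it avoids the C-library problem entirely. The snippet below is only an illustration of that property, not code from u-root; the program name is made up.

// Illustration of relying only on the syscall ABI (not part of cpu itself).
// Build a fully static binary with:
//
//	CGO_ENABLED=0 go build -o kinfo
//
// The result carries no C library dependencies and runs unchanged on
// Ubuntu, Debian, CentOS, or an initramfs with no libraries at all.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var u unix.Utsname
	if err := unix.Uname(&u); err != nil {
		panic(err)
	}
	// Only the kernel's system-call interface is used here; nothing above
	// the syscall boundary (glibc symbol versions, /etc files) is needed.
	fmt.Printf("kernel: %s %s\n",
		unix.ByteSliceToString(u.Sysname[:]),
		unix.ByteSliceToString(u.Release[:]))
}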
Our user might use one of the many tools that package a binary and all its libraries into a single file, to be executed elsewhere. The u-root project even offers one such tool, called pox, for portable executables. Pox uses the dynamic loader to figure out all the shared libraries a program uses, and places them into the archive as well. Further, the user can specify additional files to carry along in case they are needed.
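To make the mechanism concrete, here is a rough sketch of how such a packaging tool can discover a binary's shared libraries. It is not the actual pox code; it simply shells out to ldd (which itself asks the dynamic loader) and collects the resolved paths, which would then be copied into the archive next to the binary.

// Sketch only: list the shared objects a binary needs, as reported by ldd.
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func sharedLibs(binary string) ([]string, error) {
	out, err := exec.Command("ldd", binary).Output()
	if err != nil {
		return nil, err
	}
	var libs []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		// Lines look like "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)".
		fields := strings.Fields(sc.Text())
		for i, f := range fields {
			if f == "=>" && i+1 < len(fields) {
				libs = append(libs, fields[i+1])
			}
		}
	}
	return libs, sc.Err()
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: libs /path/to/binary")
		os.Exit(2)
	}
	libs, err := sharedLibs(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, l := range libs {
		fmt.Println(l) // candidate files to pack alongside the binary
	}
}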
The problem here is that, if our user cares about binary size, this option is even worse: dead-code removal won't work, since the whole shared library has to be carried along. Nevertheless, this can work in some cases.
So our user packages up their executable using pox or a similar tool, uses scp to get it to the remote machine, logs in via ssh, and all seems to be well, until at some point there is another message about a missing shared library! How can this be? The program that packaged it up checked for all possible shared libraries.
Unfortunately, shared libraries are now in the habit of loading other shared libraries, as determined by reading text files. It's no longer possible to know in advance which shared libraries will be used; they can even change from one run of the program to the next. One cannot find them all just by reading the shared library itself. A good example is the name service switch library, which uses /etc/nsswitch.conf to find other shared libraries. If nsswitch.conf is missing, or a library is missing, some versions of the name service switch library will core dump.
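To see why no static dependency list can be complete, here is a small sketch that reads /etc/nsswitch.conf and prints the extra libnss_*.so.2 objects glibc will dlopen at run time; none of these appear when you inspect the program binary itself. The service-to-library naming convention (service "files" maps to libnss_files.so.2, and so on) is glibc's; everything else in the snippet is illustrative.

// Sketch only: show which extra shared objects glibc's name service switch
// would pull in at run time, based on /etc/nsswitch.conf.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/nsswitch.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	seen := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		for _, src := range strings.Fields(parts[1]) {
			// Skip action items such as [NOTFOUND=return].
			if strings.HasPrefix(src, "[") {
				continue
			}
			seen["libnss_"+src+".so.2"] = true
		}
	}
	for lib := range seen {
		fmt.Println(lib) // libraries a packaging tool must also carry along
	}
}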
Not only must our user remember to bring along /etc/nsswitch.conf, they must also remember to bring along all the libraries it might use. This is also true of other services such as Pluggable Authentication Modules (PAM). And, further, the program they bring along might run other programs, with their own dependencies. At some point, as the set of files grows, frustrated users might decide to gather up all of /etc/, /bin, and other directories, in the hope that a wide enough net might bring along all that’s needed. The remote system will need lots of spare disk or memory! We’re right back where we started, with too many files for too little space.
In the worst case, to properly run a binary from one system, on another system, one must copy everything in the local file system to the remote system. That is obviously difficult, and might be impossible if the remote system has no disk, only memory.
One might propose having the remote system mount the local system via NFS or Samba. While this was a common approach years ago, it comes with its own set of problems: all the remote systems are now hostage to the reliability of the NFS or Samba server. But there’s a bigger problem: there is still no guarantee that the remote system is using the same library versions and files that the user’s desktop is using. The NFS server might provide, e.g., Suse, to the remote system; the user’s desktop might be running Ubuntu. If the user compiles on their desktop, the binary might still not run on the remote system, as the Suse libraries might be different. This is a common problem.
Still worse, with an NFS root, everyone can see everyone’s files. It’s like living in an apartment building with glass walls. Glass houses only look good in architecture magazines. People want privacy.
https://github.com/u-root/cpu
https://book.linuxboot.org/cpu/
Imagining a use case: embedded systems are plagued by scarce CPU, memory, storage, and network resources; they cannot get online, cannot compile, cannot run apt. With the cpu command these problems dissolve: all of the embedded system's file systems are mounted onto a powerful desktop machine, and on that machine you have all of the embedded system's resources at hand. It is as if the embedded system had been given a powerful CPU, and you can compile, go online, and run apt as you please.
This sounds a lot like what's described in the HW HarmonyOS PPT.
How does this deal with the huge latency of the network?
This sounds a lot like what's described in the HW HarmonyOS PPT.
Haha, that is indeed the case. There are people inside Huawei now who use the company's manpower and money to rework things from academia or industry, then package them under a marketing-friendly concept and roll them out. It has practically become a formula.
Looking on the bright side, reinventing the wheel also creates jobs and trains a cohort of engineers. Huawei is big enough not to fear burning money, and we outsiders would like to see what comes out of all that spending; if any of it is open-sourced, we can borrow from it too.
If you want a taste of Plan 9, you could try Redox.
It relies entirely on the network; in some settings bandwidth and latency will hurt, so it is not very dependable.
webOS implemented this sort of thing long ago, but even Palm could not make it take off, and other companies will not either. A thoroughly web-based OS gives a terrible experience: you must keep the network up, and the moment it drops you can do nothing.
A good implementation runs the basic services locally, lets a remote server accelerate applications that need serious compute, and falls back to running locally when no remote server can be detected (see the sketch below).
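Here is a minimal Go sketch of that fallback policy. The server address and port are made-up placeholders, and the "offload" branch is only a stub where a real tool would hand the command to cpu (or ssh); the point is the probe-then-degrade structure.

// Sketch only: probe a remote compute server; if it is unreachable within a
// short timeout, run the command locally instead of offloading it.
package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
	"time"
)

const remoteAddr = "compute.example.com:17010" // hypothetical cpu server

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: offload command [args...]")
		os.Exit(2)
	}
	if conn, err := net.DialTimeout("tcp", remoteAddr, 500*time.Millisecond); err == nil {
		conn.Close()
		// Remote server reachable: this is where a real tool would hand the
		// command to cpu (or ssh) for remote execution.
		fmt.Println("remote server reachable: offloading", os.Args[1])
		return
	}
	// No remote server detected: degrade gracefully and run locally.
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}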
Isn't this just the mainframe model that was popular back in the ancient times of the 1960s and 70s, before the PC was born!?
Yes. In those ancient days at big companies like IBM, an engineer needed only a local mouse, keyboard, and display, and the whole company shared one mainframe. That is how my teacher introduced it in computer class; the idea felt far ahead of its time.
Many IC design companies still work this way today: there is a code server, and all code is committed to it. All work reports are also filled out on the server, and each user is assigned an account and some space. Doesn't it look like an Internet-cafe setup? It is exactly that kind of architecture.
Banks, government agencies, and the like are the same: the machine in front of the employee is just a client terminal, responsible only for input, output, and display.