Why is ifconfig obsolete?

With servers that have Infiniband cards installed, when I use the ifconfig command I get this warning: Ifconfig uses the ioctl access method to get the full address information, which limits hardware addresses to 8 bytes. Because an Infiniband address has 20 bytes, only the first 8 bytes are displayed correctly.

Ifconfig is obsolete! For replacement check ip. The sooner you switch, the better; it took me months to replace the muscle-memory of ifconfig. It does make operations on Windows even more fun, though. Of course, you can set up more sensible aliases. I must have been living in a cave — wim. I find it curious that a command called "ip" has the capability to do link-level operations.
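The 8-byte truncation is a limit of the ioctl interface that ifconfig uses; ip talks netlink and is not limited that way. A minimal sketch, assuming iproute2 is installed; lo always exists on Linux, and ib0 is a hypothetical Infiniband device name:

```shell
# Query the link-layer address over netlink instead of ioctl.
# -o prints one line per device, which is convenient for scripts.
ip -o link show dev lo

# On an Infiniband host, the same command prints the full 20-byte
# hardware address (ib0 is a placeholder for the real device name):
#   ip -o link show dev ib0
```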

It's only deprecated on certain operating systems. If you use stuff outside of Linux, ifconfig is still very much in use; I see no such warning on FreeBSD, for example.

Show network devices and configuration:
  ifconfig            ->  ip addr show / ip link show
Enable a network interface:
  ifconfig eth0 up    ->  ip link set eth0 up
A network interface is disabled in a similar way:
  ifconfig eth0 down  ->  ip link set eth0 down

Interesting how in every use case you mentioned, the ip command is longer and more complex; that's probably a primary reason people still use ifconfig — TheLQ. TheLQ: ip provides many, many more features.
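Only the read-only forms can be tried without root; a quick check against the loopback device, which exists on any Linux box:

```shell
# Unprivileged queries; the "ip link set ... up/down" forms need root.
ip addr show dev lo
ip link show dev lo
```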

Of course it is more complex; anyway, many commands can be shortened. When I run ifconfig eth0, I can deduce that the address with fffe in the middle is the permanent one, but I have no idea which address it will use for outgoing connections. And it's only a few more keystrokes to type.
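On the fffe question: that marker is the EUI-64 pattern in an autoconfigured IPv6 address, and iproute2 at least labels the candidates, flagging privacy addresses as "temporary". A sketch, assuming iproute2; the box may have no global IPv6 addresses at all, and 2001:db8::1 is a documentation-range placeholder:

```shell
# Autoconfigured EUI-64 addresses contain ff:fe in the interface ID;
# privacy (outgoing) addresses carry the "temporary" flag.
ip -6 addr show

# Ask the kernel which source address it would actually pick:
ip -6 route get 2001:db8::1 2>/dev/null || echo "no IPv6 route"
```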

Yes, the UNIX way has a fundamental flaw. It gets even worse when you realize that entire scientific disciplines have decided that shell scripting and files on disk are the best way to do stuff. There's nothing like an iterative algorithm that loads and saves its large data to disk multiple times per iteration.

That's not a problem inherent to scripts; if you do it wrong, you can do it that way with a compiled program, too. Some things, of course, should just not be done with shell scripts. For those things, there is perl :).

Being able to code things fast and have them execute slowly is not a "fundamental flaw". It is a valid design choice, and actual experts understand that. I write many programs that I end up running exactly once after they're tested and working. Runtime might vary from milliseconds to a week.

But the thing that impacts my time is writing the code; I can get on with other stuff while the computer works for me. The number one rule of the Unix way is "everything is a file". ps, ifconfig, etc. read the information in /proc and format it for human consumption.

There is nothing in the Unix way that says "get confused about what is an application programming interface (/proc) and what is a human-readable report (ifconfig)". If you want to be notified when a file changes, that's called inotify. If you want to be compatible with systems that have something other than inotify, fswatch is a wrapper around various implementations of "call me when a file changes".

Polling is normally the safest and simplest paradigm, though, because the standard pattern is "when a file changes, do this". The biggest problem with notifications is that a file can change WHILE you're doing something, meaning it will re-start your function while you're in the middle of it. Re-entrancy carries with it all manner of potential problems. Those problems can be handled if you really know what you're doing, you're careful, and you write a full suite of re-entrant integration tests.
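The polling alternative can be sketched in a few lines of shell; because the handler runs to completion before the next poll, there is no re-entrancy to worry about. A minimal sketch, assuming GNU coreutils (stat -c, fractional sleep); the file name is arbitrary:

```shell
#!/bin/sh
# Poll a file's mtime and run a handler once when it changes.
f=$(mktemp)
last=$(stat -c %Y "$f")

# Simulate an external writer modifying the file a second from now.
( sleep 1; touch "$f" ) &

while :; do
    now=$(stat -c %Y "$f")
    if [ "$now" != "$last" ]; then
        echo "changed"   # the handler; never re-entered mid-run
        break
    fi
    sleep 0.2            # the polling interval
done
wait
rm -f "$f"
```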

Or you can skip all that and just use synchronous I/O, waiting or polling. Neither is the best choice in ALL situations, but very often simplicity is the best choice. I agree - I don't care how the commands work under the covers, as long as they have the names and syntax I have expected since ages ago. Most of the listed programs aren't really performance hogs anyway, so there's not a huge reason to rewrite them unless there are serious safety or stability issues that have to be taken care of.

Please don't generate time wasters by trying to force people to learn new commands with new syntaxes that do what the old commands did fine. It's not worth it. Not only not worth it, but a major pain in the ass if you're trying to build scripts that need to be deployed on multiple platforms. And not infrequently, they're all deployed in the same shop. This is reminiscent of Windows' "embrace, extend and extinguish" strategy.

Linux (or more accurately, Linux distributions) seems to be acquiring all of the characteristics that drove people to abandon Windows for Linux in the first place. That's "assign an IP address somewhere", "route the table", and all that. In other words, it's a modeling difference. On Linux, however, the deeper issue is the interface that netstat, ifconfig, and company present to users.

No, that interface is a close match to the hardware. Here is an interface, IOW something that connects to a radio or a wire, and you can make it ready to talk IP or, back when, IPX, AppleTalk, and whatever other networks your system supported. That makes those tools hardware-centric, at least on sane systems. It's when you want to pretend shit that it all goes awry. And boy, does Linux like to pretend. The Linux ifconfig-replacements are IP-only-stack-centric.

Which causes problems, for example because that only does half the job, and you still need the aforementioned zoo of helper utilities to do things you could have ifconfig do if your system were halfway sane. Which Linux isn't; it's just completely confused. As is this blogposter. On the other hand, users expect netstat, ifconfig and so on to have their traditional interface (in terms of output, command line arguments, and so on); any number of scripts and tools fish things out of ifconfig output, for example.

It outputs lots of stuff I expect to find through netstat, and it doesn't output stuff I expect to find through ifconfig. As the Linux kernel has changed how it does networking, this has presented tools like ifconfig with a deep conflict: their traditional output is no longer necessarily an accurate representation of reality.

But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of there so far has caused me problems that were best resolved by getting rid of that crap code. Case in point: "NetworkManager", another attempt at "replacing ifconfig" with something that causes problems and solves very few. There are things that could be better with ip. IIRC it's very fussy about where the table selector goes in the argument list, though route doesn't support table selection at all.

I stayed with route for years, but IPv6 exposed how incomplete the tool is - and clearly nobody cares enough to add all the missing functionality. Perhaps ip addr, ip route, ip rule, ip mroute, and ip link should be separate commands. I've never looked at the source code to see whether it's mostly common or mostly separate.

The people who think the old tools work fine don't understand all the advanced networking concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP can be assigned to multiple interfaces, there's more than one routing table, firewall rules can add metadata to packets that affects routing, etc.
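Each feature in that list maps to a short ip invocation; the modifying forms need root, so only a read-only rule query is executed here. eth0, table 100, the 192.0.2.x addresses, and fwmark 1 are placeholder values:

```shell
# Multiple IPs on one interface (root required):
#   ip addr add 192.0.2.10/24 dev eth0
#   ip addr add 192.0.2.11/24 dev eth0
# A second routing table plus a rule keyed on a firewall mark (root required):
#   ip route add default via 192.0.2.1 table 100
#   ip rule add fwmark 1 table 100

# The policy database can be read without root; the defaults always
# include a lookup of the "main" table:
ip rule list
```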

These features can't be accommodated by the old tools without breaking compatibility. Someone cared enough to implement an entirely different tool to do the same old jobs plus some new stuff; it's too bad they didn't do the sane thing and add that functionality to the old tools where it would have made sense.

It's not that simple. The net-tools maintainers (or anyone who cared) could have started porting it if they liked. They didn't. What happened was organic. If someone brought net-tools up to date tomorrow and everyone liked the interface, iproute2 would be dropped. Well, keep the syntax too, so old scripts would still work.

The old command names could just be scripts that call the new commands under the hood. Perhaps this is just what you meant, but I thought I'd elaborate. Idiots confuse "new" with "better" and want to put their mark on things. Because they are so much greater than the people who got these things to work originally, right? Same as the systemd crowd.

Sometimes they realize decades later that they were stupid, but only after having done a lot of damage for a long time. Unfortunately, Linux decided to only support s. Unix was founded on the idea of lots of simple command line tools that do one job well and don't depend on system idiosyncrasies.

If you make the tool have to know the lower layers of the system to exploit them, then you break the encapsulation. Polling /proc has worked across eons of Linux flavors without breaking. GNU may not be Unix, but its foundational idea lies in the simple command tool paradigm.

It's why GNU was so popular, and it's why people even think that Linux is Unix. That idea is the character of Linux. But it's not Unix.

It's merely UNIX-like. I like Linus. He's smart and he is a great coding leader. If he decides to put his foot down (even though his purview, in principle, is only the kernel), maybe Linux can be salvaged. Otherwise, it's just another shitty Windows wannabe, just with a smaller usage share. "If you make the tool have to know the lower layers of the system to exploit them then you break the encapsulation"? Stop standing in the way of progress. There is a new order now. Follow it or get out of the way.

Oh, yes. But seeing that takes actual insight and experience; the ones pushing for these new "faster" tools usually lack both. What moron called the tool ss?

I think someone who does not check Google first. It is not only Unix history being wiped here. Installing more programs by default makes the install image larger in megabytes and take longer in seconds. Still, the size increase due to stuff like netstat and ifconfig is trivial. Where the bloat comes from is needing python, java, and javascript, often in various versions, to make a system run. There is absolutely no reason this crap needs to be mandatory.

And talk about expanding the attack surface. Then you would have to compete on merit, i.e. actually be better, which it rarely is. People would just keep using the old ones and ignore the new ones.

This would be a blow to the egos of the people reinventing the wheel, and hence can't be allowed. I just wasted several hours ripping his crap out of armbian; now things actually work the way I want, and I can customize things easily. Who ever has seen a netstat or ifconfig run taking more than a second or two? Unless you put them in a tight loop, you won't ever notice the difference in the load of the system.

I have never, ever, in decades of sysadmin'ing, worried about how many resources ifconfig or netstat take. Worried about efficiency? In the aggregate you'll waste more CPU- and man-hours compiling and debugging your replacement tools than using ifconfig or netstat ever will. Go spend that time on something useful.

For me, it is still unable to perform clean shutdowns; at least the filesystem has journaling to help recover from systemd's problem. Funny you should say this, because RedHat now ships systemd with xfs as the filesystem, and for some reason xfs likes to pretend it has written data when it hasn't, a lot more so than ext4 ever did. So, if an unclean shutdown occurs during a massive file op, such as a yum update, you end up with a seriously broken system with a bunch of 0-byte files in place of major libraries and binaries.

Not on my Linux. Gentoo will have this stuff around forever. Still using OpenRC too without a hiccup. Bit of a pain to keep up-to-date, but it always does what I want, with no talking back. And that is just the thing: Once you are an actual system administrator and not just a glorified user with delusions, this is priceless.

In theory netstat, ifconfig, and company could be rewritten to use netlink too; in practice this doesn't seem to have happened, and there may be political issues involving different groups of developers with different opinions on which way to go. No, it is far simpler than some mythical "political" issues. It is simply that hackers, especially amateur ones who write code as a hobby, dislike trying to work out how old stuff works.

They like writing new stuff instead. Partly this is because of the poor documentation: explanations of why things work, what other code was tried but didn't work out, the reasons for weird-looking constructs and techniques, and the history behind patches are rarely written down.

It could even be that many programmers are wedded to a particular development environment and lack the skill and experience, or find it beyond their capacity, to do things in ways that are alien to it.

I feel that another big part is that merely rewriting old code does not allow for the "look how clever I am" element that is present in fresh, new software. That seems to be a big part of the amateur hacker's effort-reward equation. One thing that is imperative, however, is to keep backwards compatibility.

So that the same options continue to work, and that they provide the same content and format. If that were lost, there would be little point keeping it around. The new commands should generally produce the same output as the old, with the same options, by default.

Using additional options to get new behavior. If you don't want to have these in your special kernel, the proc filesystem can be disabled. For those systems which need something more efficient (I suppose things involving heavy use of containers or virtualization), use the new interfaces.
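Reading /proc directly is exactly what the old tools do under the hood; a minimal sketch, assuming a Linux /proc (per-interface counters live in /proc/net/dev):

```shell
# List interface names straight from procfs, the same data source the
# classic net-tools format for human consumption.
awk -F: 'NR > 2 { gsub(/ /, "", $1); print $1 }' /proc/net/dev
```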

I'm growing increasingly annoyed with Linux' userland instability. For those who are advocating the new tools as additions rather than replacements: Remember that this will lead to some scripts expecting the new tools and some other scripts expecting the old tools.

You'll need to keep both flavors installed to do ONE thing. It does not make any sense that some people spend time and money replacing what currently works with some incompatible crap. Therefore, the only logical explanation is that they are paid in some way to break what is working.

Also, if you rewrite tons of systems tools you have plenty of opportunities to insert useful bugs that can be used by the various spying agencies. On the other hand, we have the ip command suite.

This command-line tools package is the new kid on the block, relatively speaking, and has been chosen as the way forward by the bleeding edge of Linux users. With added functionality and a steadily growing user base, the ip command is a serious contender for your muscle memory or aliases. So let's take a look at these two commands to see what the ip command suite offers. You know your favorite ball cap?

The one that has the sweat stains inside the headliner, but throwing it on just feels right? That's ifconfig. It's safe, it's familiar, and you feel comfortable using it. The ifconfig command still has a lot to offer its users.

Whether it's displaying network settings, configuring an IP address or netmask, creating aliases for interfaces, or setting a MAC address, ifconfig can handle it. Let's take a look at how to use ifconfig to accomplish some of the more common tasks you may find yourself working on. This is the most basic and overused form of the ifconfig command.

Chances are, you are running this to get information about a particular interface, and while this works, it will probably over-deliver. As you can see, there is a lot of information to sift through here. So how can we narrow it down to exactly what we want? Many times you just want to look at a specific interface.

To view details for a specific interface, use the standard ifconfig command followed by the interface name. For example: ifconfig eth0. With the ifconfig command, you can do far more than just view configurations. Let's take a look at how to enable and disable an interface. The same goes for disabling an interface: only instead of up, we use down. Another common setting is the MTU; this value allows you to limit the size of packets sent over a specific interface.

I once worked a data replication failure for over three weeks before figuring out that the MTU was too large for the replication interface. I say this to make a point: not every network interface can support jumbo packets.
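Reading an MTU needs no privileges, while changing one does. A sketch with both spellings; eth0 and 9000 are placeholders, and lo is used for the readable part because it always exists:

```shell
# Print the current MTU of the loopback interface (no root needed):
ip link show dev lo | awk '{ for (i = 1; i <= NF; i++) if ($i == "mtu") print $(i + 1) }'

# Setting an MTU requires root; old and new spellings side by side:
#   ifconfig eth0 mtu 9000
#   ip link set dev eth0 mtu 9000
```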


