Is C Still Relevant in the 21st Century?


Many programming languages have come and gone since Dennis Ritchie devised C in 1972, and yet C has not only survived three major revisions, but continues to thrive. Large chunks of Windows were written in C, along with most of Linux.

But aside from this incredible legacy, what keeps C atop the Tiobe Index? The number of jobs posted for C programmers is not huge, and many of those listings also ask for C++ or Objective-C. On Reddit, the C community, while one of the ten most popular programming communities, is half the size of the C++ group. (Of course, after more than four decades, maybe there's not a whole lot of new material being published about C; aside from this article, of course.)


Despite being overshadowed by other languages, I believe C remains relevant for the following reasons:

It’s Easy to Learn

The only advanced features in C are pointers and function pointers. Once you've mastered those, you've pretty much learned the language. Knowing C also provides handy insight into higher-level languages: C++, Objective-C, Perl, Python, Java, PHP, C#, D and Go all have block syntax derived from C. And reference variables in C# will be easier to understand if you already know C pointers.
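
By way of illustration, a short sketch of both features (the names are invented for the example):

#include <stdio.h>

/* A plain function we will later call through a function pointer. */
int add(int a, int b) { return a + b; }

int main(void) {
    int x = 41;
    int *p = &x;               /* p points at x */
    *p += 1;                   /* write through the pointer: x is now 42 */

    int (*op)(int, int) = add; /* op is a pointer to a function */
    printf("%d %d\n", x, op(1, 2));
    return 0;
}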

It’s Still Used

There is an immense amount of software written in C that’s still used, including Apache and NGINX Web servers, MySQL, PostgreSQL, SQLite, Ingres database, GIMP, CPython, Perl 5, PHP, Mathematica, MATLAB and most device drivers.

From the end of the 1980s until the early 2000s, developers relied on C to develop games, with C++ taking over after that. There’s so much C source code still around that learning to program games in C using the SDL library is not hard.
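
For the curious, a minimal SDL2 sketch, assuming SDL2 is installed and the program is linked with -lSDL2; it does nothing more than open a window for a couple of seconds:

#include <SDL2/SDL.h>

int main(int argc, char *argv[]) {
    (void)argc; (void)argv;              /* unused */

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Window *win = SDL_CreateWindow("Hello SDL",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            640, 480, SDL_WINDOW_SHOWN);
    if (win != NULL) {
        SDL_Delay(2000);                 /* keep the window up briefly */
        SDL_DestroyWindow(win);
    }
    SDL_Quit();
    return 0;
}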

The Internet

The Internet is basically driven by C applications. Most browsers are written in C++, but C code is used for much of the infrastructure: web servers, mail-sending utilities, DNS utilities and so on.

Some modern compilers generate C as an output stage. This saves the compiler-writer having to create a code generation stage for each platform.

Need for Tight Coding

The increased availability of low-cost processors with small amounts of RAM and ROM requires tight coding, and C fulfills that role perfectly.
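
A sketch of why C fits that niche: a memory-mapped hardware register can be driven directly through a volatile pointer, with no runtime underneath. The register address and bit number below are invented for illustration; a real part's datasheet would supply them.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register at a made-up address. */
#define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

void led_on(void)  { GPIO_OUT |=  (1u << 5); }   /* set bit 5   */
void led_off(void) { GPIO_OUT &= ~(1u << 5); }   /* clear bit 5 */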

It's not been all rosy for C, especially with Internet-facing code: many of the vulnerabilities that have plagued Microsoft and other vendors stem from C functions that don't do bounds-checking and end up being called by buggy code. (Networked computers weren't so commonplace back in the day, and no one predicted that malware writers working remotely would seek to exploit these unsafe functions.) These vulnerabilities have since been catalogued, and a large number of C functions have been banned from use, replaced with safer versions that take an extra parameter (usually a limit value).
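
As an illustration of the pattern (a sketch, not any vendor's specific banned list): strcpy will happily write past the end of its destination, while a bounded alternative such as snprintf takes the buffer size as the extra parameter. Microsoft's *_s variants (strcpy_s and friends, standardized as C11's optional Annex K) follow the same idea.

#include <stdio.h>

void copy_name(char *dst, size_t dst_size, const char *src) {
    /* Unsafe: strcpy(dst, src) keeps writing past the end of dst
       if src is longer than the buffer. */

    /* Bounded: snprintf never writes more than dst_size bytes and
       always NUL-terminates the result. */
    snprintf(dst, dst_size, "%s", src);
}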

Newer C Compiler Support

Fifteen years on, the C99 standard is largely supported in compilers such as GCC and Clang, along with several commercial ones. The C11 standard, however, is still too new to be fully implemented, although it has partial support. It's a reasonable guess that the most popular version of C is still C89 (also known as ANSI C). But with CPUs gaining ever more cores, it's likely that C11 will become a necessity within a few years because of its built-in thread support (the optional <threads.h> library).
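
A minimal sketch of that interface, assuming an implementation that ships <threads.h> (a conforming compiler may define __STDC_NO_THREADS__ and omit it):

#include <stdio.h>
#include <threads.h>

/* C11 thread entry points return int and take a void * argument. */
int worker(void *arg) {
    printf("hello from the worker, arg = %d\n", *(int *)arg);
    return 0;
}

int main(void) {
    int value = 42;
    thrd_t t;

    if (thrd_create(&t, worker, &value) != thrd_success)
        return 1;
    thrd_join(t, NULL);    /* wait for the worker to finish */
    return 0;
}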

Is C Still Relevant?

Yes. It's easy to learn, there's a lot of it still in use, and there are plenty of free or open-source compilers. While it may not get you a job, it will give you an excellent grounding in low-level programming. It's not growing in popularity… but it's not going away anytime soon either.



Image: Family Business/Shutterstock


48 Responses to “Is C Still Relevant in the 21st Century?”

December 09, 2014 at 12:53 am, Clinton Staley said:

I teach C programming as a professor, and I can testify that it’s *not* easy to learn. Or at the least, it’s not easy to teach. For instance, the pointer manipulations possible and common in the language require visualization that many students find difficult. Relevant despite its age, yes. Small and focused syntax and semantics, sure. Easy, not so much.


December 10, 2014 at 8:44 am, Tom Conlon said:

I agree with you Clinton.

When learning C I became very used to restarting the computer when I handled pointers incorrectly.

It is fast and powerful, and because you work close to the machine it is also dangerous; no, it is not easy to learn.


December 10, 2014 at 10:02 am, Davis Hernandez said:

I think it's easy and simple to teach, and also great for teaching pointers, because it's not like modern languages where everything is a pointer (like trying to explain what air or water is: it becomes so natural that you can't).

The problem is how you explain it. If you're a senior-level developer, you already live in a very complex world, and it will be really hard to explain such simple stuff unless you have really strong pedagogic skills; then your students will feel the language is hard. I think you must change your approach: if you feel C is hard to teach, the problem is your current perception of C, of programming. Remember that if you really teach it, those new programmers should have an easy time with other languages!


December 10, 2014 at 10:06 am, Davis Hernandez said:

Oops, I just want to update my previous post: I think I was talking about C++ more than just C.


December 10, 2014 at 11:55 am, Dan Sutton said:

Agreed. C is not easy to learn at all. It’s a bit like playing the bass: everyone can be Bill Wyman, but playing it properly is a skill which requires infinite patience.


December 11, 2014 at 3:30 am, Justin Halls said:

It might be easier to consider teaching C in the context of something like a PIC microprocessor. Once you get away from the abstractions of large computer operating systems the use of pointers in relation to memory addresses becomes much clearer and even intuitive. After all a PIC processor is about the same degree of complexity as the machines that C was originally designed for.


December 15, 2014 at 12:26 am, Kasun said:

I disagree with the 'easy to learn' label, as C programs are among the most complex implementations I have worked on. It's not the syntax that's hard, but the power offered to the programmer is quite high and difficult to handle without good skills. By the way, C is still practically the most powerful language so far.


December 09, 2014 at 4:03 pm, Majid alDosari said:

I think you might say it’s “easy to learn” because of its few features. But pointer stuff will be forever a stumbling block.

There is now a push toward Python for teaching introductory programming, which has its merits. Unfortunately, and I say this as a Python advocate, if you don't know C, you don't know how a computer works. I'm still relatively young, but my "CS101" was C/C++.


December 10, 2014 at 2:44 am, Schlingel said:

As long as there's software written in C, C will stay. At the moment we have the Linux kernel, Git and a whole bunch of other applications which have to be maintained. And that's on such complex hardware stacks as the PC and Mac!

Additionally, hardware drivers are likely to be written in C. Even when they're written in C++, the bindings to other languages, like Java or .NET, are likely to be written in C because it's easier and more straightforward.

Microcontroller programming is also likely to be done in C. (Even though C++ gets stronger and stronger in that field.)

But my opinion could be biased. I started learning programming back then with Borland Builder and C. My final project in school was to program a network stack with MISRA C.


December 10, 2014 at 2:53 am, Derek Hunter said:

Yes, pointers can be difficult to understand but that’s the point of “learning” things isn’t it? If someone doesn’t understand the concept and use of a pointer then they don’t understand programming and I would say they were basically “scripting”.

C is great because it is small and very close to the machine. C++ is awful because it puts a distance between the programmer and the metal.

Remember – there’s no such thing as “an object”, it’s all just bits and bytes.


December 10, 2014 at 7:01 am, Joren said:

It's perfectly fine that you think C++ is awful. But to say that it puts a distance between the programmer and the machine is just plain wrong. C++ still has all the features that make C so close to the metal. It just offers additional facilities, like OOP and templates. OOP and templates are means that offer new layers of abstraction, which can be used to the programmer's advantage. Programmers who, because of these features, no longer know what's going on are bad programmers, but it's not the language that's bad because of them.


December 10, 2014 at 4:58 am, Pedro said:

“Remember – there’s no such thing as “an object”, it’s all just bits and bytes.”

Remember, there’s no languages.. only 1s and 0s ……..


December 10, 2014 at 4:59 am, Michael Johansen said:

In Kernel mode C is the only suitable language.


December 10, 2014 at 6:14 am, Riccardo ITA said:



December 15, 2014 at 12:29 am, Kasun said:

Yeah I agree (Y)…


December 10, 2014 at 6:25 am, Francis W. Porretto said:

Of course C is still relevant. As the lowest-level hardware-independent language, it’s a valuable resource for persons who need to work “close to the bare metal” but need to remain independent of specific hardware characteristics. Also, as the article mentions, comprehension of C is essential to the mastery of any of the higher-level languages founded on its syntax and scope rules.

Is C the language of choice for the typical new application of today? No. But relevance merely requires that there exist a problem domain to which C is an applicable and useful tool. As we mathematically inclined types like to say “quod erat demonstrandum.”


December 10, 2014 at 6:57 am, Jay Sistar said:

First of all, object oriented programming is a style, and there are objects in C. You can define them as a struct (or typedef struct to put them in the main type namespace as C++ does), and every function that uses them takes a pointer to that struct. A function naming standard would help, as would every class feature in C++ (they're all conventions), but it's not hard to do in C if you follow conventions.
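
For illustration, a minimal sketch of that convention (the type and function names are invented):

#include <stdio.h>

typedef struct Counter {   /* the "object" is just a struct */
    int value;
} Counter;

/* "Methods" are plain functions taking a pointer to the struct. */
void counter_init(Counter *c)      { c->value = 0; }
void counter_increment(Counter *c) { c->value++; }
int  counter_get(const Counter *c) { return c->value; }

int main(void) {
    Counter c;
    counter_init(&c);
    counter_increment(&c);
    printf("%d\n", counter_get(&c));
    return 0;
}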

C++ is a standardization of conventions, and that's a really good thing, as you can probably see from the number of incompatible convention-establishing "class" libraries that JavaScript has: although Self was faster than Smalltalk, prototypes are not faster to implement, and they are certainly not automatically type-checkable (you have to do that yourself, and sometimes you establish a convention for doing so…).

However C is very relevant because you see exactly what’s going on. There’s nothing hidden from you. Sometimes I really want the old translate to C style of C++ compilers back! I’d prefer to program in C++, which pisses Linus off, but makes me more productive.

To say that pointers are a problem (if you’re not talking about the unsafe code in the C standard library, which is a problem) really means that you don’t understand a process memory layout, you probably don’t understand MMUs, memory segmentation, or memory paging, and basically you live in an abstract world where you’ll never understand what’s happening.

The fact is C is easy. Not only that, ASM is easy (it's not portable, but it's another stepping stone to understanding low-level operations, which ARE necessary to understand). The real problem is that no one teaches programming starting from the lowest level and then going up. If you don't start with a calculator, then move to ASM (which to a beginner is like a programmable calculator), move on to C, which is the first really beneficial abstraction level, then move on to C++, and maybe even go on to functional languages such as ML and Haskell, then "You're doing it wrong!"


December 10, 2014 at 6:59 am, Adam said:

I like this article, and that it goes beyond the normal fanboy stuff, but it's important to mention that a good C programmer is fluent in ASM and can easily translate between the two. It's also important to mention C++ is completely different than C, and more similar to Python, PHP, Visual Basic, etc. Hence why C++ is so easy. Dennis Ritchie was also not the sole inventor of C.


December 10, 2014 at 7:24 am, Andy Shissler said:

There really aren't too many options if you're writing embedded software. And I'm talking about "real" embedded software where the memory resources and processor speed are limited, and there may or may not be an RTOS. It's either C or C++. That's it. And C++ software has a higher memory need since it relies on a heap more than C. Let's just say that C++ programmers I know have a higher tendency to use the new() operator than C programmers with malloc().


December 10, 2014 at 8:02 am, Adam said:

Actually, the author should stick to writing about script kiddy languages like 90% of your articles (Python, C#, BASIC (lol), etc). It’s obvious you don’t know about low level programming (if you did maybe you’d have an actual job doing it?).


December 10, 2014 at 8:13 am, Tim Palo said:


«C is great because it is small and very close to the machine. C++ is awful because it puts a distance between the programmer and the metal.»

This is not true. Pointers are only a subset of iterators, dynamic allocation is not necessary at all (placement new), and the overhead of many C++ constructs versus similar C structures is negligible, when it exists at all.

Have a look at a reference text such as Scott Meyers' «Effective C++ in an Embedded Environment».


December 10, 2014 at 8:32 am, Hammad Rauf said:

Yes. I think so because maintenance of legacy code and new operating systems will need it. For a new CPU, you can either start building the higher-level tools in assembly (or binary!) or you can use C, for which a compiler is usually available. The C ISO standards process also seems to have improved the language itself over the years.


December 10, 2014 at 10:24 am, DDrew said:

I have been programming in C and C++ for about 35 years. I would say I have them figured out pretty well by now. I think there is nothing in C that C++ does not have or do. Using classes can be very handy; overloading is mainly a PITA. Some college professor needed some of the over-the-top odd things in C++ just to show off. Just my opinion however. Keep the code as simple and pedantic as possible, I say, because someone else is going to come along later and need to quickly comprehend what you have done. The more brain cycles you put them through in doing that just slows things down and leaves opportunity for errors. If you ever get into any of my code, you will think a freshman wrote it with no experience… right… I'm lovin' it!


December 10, 2014 at 11:14 pm, Clinton Staley said:

Perhaps I can defend the reputation of college professors :), by pointing out that operator overloading is not just for making cute Complex number classes. Its true mission in C++ is to support the template system. If a container template, for instance, expects its instantiating type to allow < comparison, you will need to define that operator for your instantiating type to use the template. And template libraries like the STL or Boost greatly simplify code and make it more reliable; they're worth supporting.

Besides, pretty much all of this stuff was added by Stroustrup, who AFAIK wasn't a professor :).


December 10, 2014 at 10:30 am, chaconne said:

@Tim Palo don't read a book; look at real embedded implementations. C++ adds a lot of cool features, but they increasingly abstract away from the machine. To deny this is largely to deny one of the main reasons for C++ in the first place. 'c' is clear, concise, and provides most of the tools required for programming excellent, long-lived, complex programs, to which large portions of Linux/Android, Mac (NetBSD portion at least)/iOS, and WinNT are all a testament. Selling books like Meyers' is one thing; systems programming in the real world is another. C++, Python, and all the other languages have their place for sure, but it will be a long time before 'c' is displaced from the jobs it does best.


December 10, 2014 at 11:29 am, Željko Perić said:

I like C, I don’t like C++, I tip on C# with new compiler for native code ? 🙂


December 10, 2014 at 11:45 am, Jimboy said:

C is a bit like Latin. It will be relevant as long as people bother to learn it. At some point learning it will offer no advantage and it will transition into a state of historical-only relevance.


December 10, 2014 at 2:30 pm, Jay Sistar said:

C did replace Pascal for most people, so it could happen again. I'd welcome it, as C has very strange parsing rules :( However, I haven't come across a good language for making operating systems other than C++, which has even more strange parsing rules.

Perhaps the same could be said of English.


December 10, 2014 at 11:57 am, Alan8 said:

Here in Michigan, there seems to be a steady demand for C programmers to work on embedded systems for vendors for the auto industry.

Studying C will prepare you for C++, Java, and C#, which build on the basic C syntax. It will also give you one of the most efficient languages, which can be used for compute-intensive tasks.


December 10, 2014 at 2:46 pm, BAhmed said:

'C' is very much relevant. Application or user-side code may not be written in 'C'; developers may prefer currently popular languages such as C++ or C#. But on the system side, writing drivers, kernel code, BIOS, firmware and embedded platforms, 'C' is the preferred language.


December 10, 2014 at 4:44 pm, Simon said:

I take issue with Adam's comment: "It's also important to mention C++ is completely different than C."
They are not different; they are the same language, or at least you can either view C++ as C with extensions or C as a subset of C++.
Either way, you can take a C program and compile it with a C++ compiler. OK, there are a few constraints, but you can do it. I use a C++ compiler for most of my embedded C projects.
You have to remember where C++ came from. When C++ first saw the light of day there was no C++ compiler; there was only a preprocessor for a C compiler.

Is C faster than C++? No! They are the same language. What makes the difference is the way they are used.
In C++, people use the extended object-oriented features and forget what happens underneath when they do this. They tend to create objects that depend on other objects, which in turn depend on further objects, and so on. Unless you are very careful you run the risk of having no idea how long something will take to run.
People tend to think that C is the simple language and C++ is the complex one. This is not so: you can write simple C++ programs, and C can be every bit as complex as C++ (it's just hard work). The advantage of C++ is that it makes the complex stuff easier to manage.

The disadvantage is that C++ is still as low-level as C; let's face it, object references are still only pointers. That is probably why more modern languages were created.
They eliminate all of the unnecessarily complex syntax and remove the dependency on the memory model so that we have projects that are easy to manage, easy to scale massively and will not crash when the pointers are wrong.

If you are writing a big application then there is no question you have to look at modern languages if you want to get straight to the core of the subject and not be swamped by the project management.
Not to mention the huge libraries of functions and APIs that they provide, so you don't need to reinvent the wheel every time.
Maybe that's actually the point: the languages haven't changed that much, it's the support that has.

So is C dead? If you are writing applications then probably. If you are writing drivers or embedded software then absolutely not, nothing has emerged that is good enough to replace it.

If you are a modern programmer, writing applications, do you need to learn it? Probably not, but I would suggest you should learn it if you want to write the best applications.
If you know what's going on underneath, you will write better code.


December 10, 2014 at 7:30 pm, Kaz said:

C is easy to learn — incompletely and badly.

You can know most of the features and syntax, and be able to break down a given piece of code by its anatomy — know the shape of every declared object, and follow the maze of indirection with pointers and function pointers —, yet not be aware of the correct usage of all the constructs which avoids undefined behaviors.

C is not easy to *use* to solve a problem.

Even if you know the features, it is not obvious how to put them together to implement some requirements. You have to know how to solve problems in that type of language with that level of abstraction.

Easy to learn versus easy to use: different things.

C programs are sometimes not easy to *debug*.

Security issues and memory leaks are found in years-old C programs written by experts.


December 10, 2014 at 7:39 pm, Kaz said:

Here is another thing: the people matter.

“gets shit done” beats “programming language choice”.

Case in point: Linus Torvalds.

Torvalds needed a better version control system than what was freely available, so he hacked up Git. Git is wildly successful and widely used.

Torvalds wrote it in C.

What’s more, he wrote it pretty fast.

Git is the sort of thing that should probably be written in a higher level language, but this issue is overcome by motivation and hacking skill.


December 10, 2014 at 8:02 pm, zonafets said:

C is not difficult to learn or teach, no more than VB, Python, Perl, C#, Java, JavaScript, Pascal, PHP, Ruby, Erlang, Haskell, OOP and so on, to survive in a world of beginners who believe it is possible to write more declarations to do less, or write less code to do more.

What is difficult is writing and learning 28MB of source code for a db-management class with 10 levels of inheritance, just to avoid directly writing:

sqlite_exec_printf(db, "INSERT INTO table1 VALUES('%s')", 0, 0, 0, zString);

which would work on 90% of other RDBMSs, though we will probably use it on no more than 3.

30 years of evolution of IDEs and not one that warns us about

printf("This is wrong %d", "text");
^^ ^^ int expected

And now? Reinventing the wheel with Go, Dart, Swift…

C is a glue for everything and everybody. Other languages are academic exercises, or attempts by some big producer to bring water to its own mill.


December 10, 2014 at 9:54 pm, Andrew Lankford said:

C has its faults, like any language. But no one has succeeded in developing a better C than C. It seems to me that plenty of programmers could agree on its faults and on the niches that it fills, and come up with a successor language to easily replace it. Guess not, though.


December 10, 2014 at 11:10 pm, Greghe said:

C: the power of assembly language with the flexibility of assembly language!



December 11, 2014 at 3:44 am, Josh said:


"30 years of evolution of IDEs and not one that warns us about

printf("This is wrong %d", "text");
^^ ^^ int expected"

Actually, the Clang (LLVM) compiler that ships with Xcode does warn you.

I quote: "Format specifies type 'int' but the argument has type 'char *'"

I’ve been using C mainly for speeding up slow parts of an Objective-C application and that works fine.

But mostly, I find C so much more enjoyable. Doesn’t seem to get mentioned a lot that C is fun.


December 11, 2014 at 4:34 am, Ramana said:

I am a professor and I voluntarily teach 'c' to first-year grad students. The problem is that there are not many good teachers of 'c' or any other programming language, because most teachers teach 'c' syntax but not concepts (e.g. why you need a pointer, structure, union, bit fields, etc.).
This makes students think that programming in 'c' is very difficult and cultivates a fear in them.
I love 'c' and I think it should be the first programming language to learn (properly), as it is the closest PL to the HW. You can do OOP in c, and we actually wrote a very big system in c using OOP concepts for the telecom industry on VAX machines way back in 1987.


December 11, 2014 at 10:57 am, Bill said:

I became fascinated by computers in the 50s when, by today’s standards they were crude calculators. I was constrained by storage for programs and data. Of course I learned Assembly for lots of machines: the 1130 series, the 360/370s, 6800, z80 etc., etc.. I even wrote microcode for machines!

When I went off to college I discovered to my delight C. I understood pointers (I used them all the time), I understood bounds checking and parameter validation. I could write a statement in C and “know” what the compiler would do. Sometimes I actually find myself “seeing” each C statement as it would be written in Assembly. I was still constrained by storage, so I learned to pack bits for booleans, etc..

C is indeed close to the hardware and good programmers who are writing code to run the hardware will generally prefer C over Assembly. Any language that obscures understanding exactly what is being done at the lowest level will be avoided. Kernel programmers will always try to write “tight” code that is not wasteful of resources. C facilitates this process, you just better understand what you are doing and why!

However, and most important, programming is not understanding a language. Good programming is knowing how to take advantage of the system and write good code based upon good algorithms. Knowing when to use a bubble sort vs an insertion sort is rather important! Yes, a trivial example, but illustrates the point. A skilled programmer will use or develop the proper algorithm, no matter what the language is. But always understanding the fundamentals will certainly make clear the methodology to be employed.

I have interviewed graduate engineers who cannot give the correct answer to a simple problem: a = 2 + 3 * 5! They say a = 25 more than half the time. Sad but true! Yes the compiler will do the math correctly, but if you don’t know the basics, you don’t really know how to solve a problem — you are not a programmer.

Someone made a statement that shocked me recently. He said, there is no longer any need for programmers. All the solutions are available on the internet, all you need to do is cut and paste!
When that same man goes out to find a solution, will he look for the answer to be 25? Likely so.

A discussion of C going away is, to me, a bit silly. Let's graduate students who know the fundamentals. Languages that abstract the basics make for lucky coders, not programmers.


December 11, 2014 at 11:48 am, Yves said:

I like C very much for its expressive yet concise syntax. It’s THE language for resource-tight systems: would you program an Arduino device in any other languages (beside AVR assembler :-)) ?

On the other hand, it handles like a scalpel. An expert creates marvels with C. A rookie might create a bloody mess :-).

Long life to C!


December 14, 2014 at 6:04 pm, Bill said:

That is why I would not, in general, hire a "C" programmer with no experience. Aside from understanding the basics, he/she is usually not skilled at building supportable code! Worse,
they don't think objectively. That is, they assume "this cannot happen" or, worse, do not even see the possibility of failures in the design or implementation.


December 11, 2014 at 12:20 pm, Paul Kay said:

One quibble: The security faults mentioned (“many of the vulnerabilities that have plagued Microsoft and other vendors”) are not C or even C library issues. They are lazy programmer issues.

While C does not do bounds checking, most of the library calls do have bounded versions. In either case, if you think like a kernel developer – never trust the caller – the issues go away.


December 14, 2014 at 6:09 pm, Bill said:

Absolutely true! Failures of the design and programs are not faults of the language.
They are certainly the faults of the programmers! Since when do people assume you can put five gallons into a three gallon bucket? Just foolish. Not a failure of the language, just coders who never think about …failures?


December 13, 2014 at 8:33 am, DuskoKoscica said:

For me, C++ is the No. 1 language.

It is easy and very expressive, and if C does the trick, then who cares!


December 14, 2014 at 6:16 pm, Bill said:

Of course, you can be lazy! You don’t need to think as much. However, C++ does require you handle exceptions properly. Your faith is in the compiler and support libraries to protect you.

Likely the compiler is written in "C" (perhaps with support from lex and yacc). The run-time library is probably written by people who will implement what a good "C" programmer would naturally include in the implementation.

No disrespect, just believe people who really understand the fault(s) possibilities will write better code regardless of language.


December 15, 2014 at 2:12 am, John Aspras said:

I could be ranting about this all day,
but I will just comment this.

If you want to build software that matters, you will use C / C++.

Don't forget: C++ is an extension of C.


January 21, 2015 at 3:19 pm, cym13 said:

I am very uneasy when I see this kind of post.

First, I must say that I love C and that I have a very good background in assembly and in higher-level languages such as Python.

I agree very much that C is mandatory to progress into serious Computer Science, be it only because everybody just assumes that you’re fine with C syntax and concepts.

But C is difficult, to learn and to use. The relatively small number of features is balanced by the fact that you are expected to build every structure you need yourself. This is not easy. Furthermore, everybody seems to forget all those (often compiler-specific) keywords: extern, unsigned, pragma… And I'm not even talking about macros. Of course when you start writing you don't have to deal with those, but it prevents you from reading most serious code, which is full of interlaced macros for portability…

C is not a portable language. One can obviously write a C program that will run on many systems, but the price in terms of complexity is often *huge*. Being more portable than assembly isn't enough to call it portable.

Last but not least, most of the arguments behind the choice of C are technicians' arguments, not engineers'. What I mean here is that a technician will choose performance above all, but the engineer has to balance it with real-life problems. In real life, you think about cost. There are two kinds of cost: the performance cost (execution time, resources needed, memory used) and the human cost (development, debugging, updates, security…). Most modern applications don't need more than a decent level of performance; in most applications the end-user doesn't even see the difference between a program written in C and one in Python, even if the latter is way slower than the former. But the Python program will take less time to write, be more concise, easier to read, more portable, easier to distribute, easier to debug, safer… For some applications, the performance cost is the more important of the two, but that's definitely not the general case. We have to stop thinking as technicians; we have to think like engineers.


May 15, 2016 at 5:27 am, Zed said:

Would anyone be able to give me some advice on beginning to learn C programming?

Would the people who criticise the language please give it a rest? Developing the language in the first place deserves a pat on the back for the originators.
Without it, no audio codecs, no Microsoft OS kernels or Linux kernels, and, correct me if I'm wrong, it is also used for writing firmware. And used to develop other programming languages.

