The programming thread.

By the time better options were available, they were already so heavily invested in (and reliant upon) COBOL-written software that changing to something new would have been incredibly complicated and expensive.
Didn't they resurrect COBOL programmers for the millennium problem?
 
I've worked some more on my Mandelbrot project. The graphics output was indeed the cause of the unexpectedly low performance. I had designed a very flexible system for drawing different kinds of geometric shapes. It worked well enough for those kinds of objects, but not for outputting bitmaps. I've added direct bitmap support, and now calculating and displaying an 800x800 matrix of points, with up to 200 iterations on 14 threads, takes 50 ms. 1000 iterations (necessary for sufficient resolution) takes 69 ms. Good, but not good enough if I want to zoom in and out smoothly. And I want that.

So, here comes CUDA (at least I hope so).

14 threads seems to be optimal. More than that it runs slower. I don't know if I can control the allocation of cores/hyperthreads directly; if that's possible I could perhaps get it to run faster (on a Ryzen 7950X with 16 cores). But I still have a lot to learn about threads (I'm halfway through a course on Udemy), so perhaps there are still things I can do to speed it up. BTW, as mentioned, the threads don't share data, so there are no data races.
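Not the author's code, just a minimal C++ sketch of what the standard library can tell you about this: std::thread::hardware_concurrency() reports the number of logical processors (typically 32 on a 16-core/32-thread 7950X). Pinning threads to particular cores or hyperthreads isn't portable C++ at all; it needs OS calls such as SetThreadAffinityMask on Windows, which isn't shown here.

C++:
// Minimal sketch: choosing a worker count from what the hardware reports.
#include <cstdio>
#include <thread>

int main() {
    // Number of logical processors, or 0 if the library can't tell.
    unsigned logical = std::thread::hardware_concurrency();
    unsigned workers = logical ? logical : 4;

    // Leaving a couple of threads free for the OS and the render loop is a
    // common heuristic; the best value is still found by measuring.
    if (workers > 2) workers -= 2;

    std::printf("logical processors: %u, workers used: %u\n", logical, workers);
}
In the end the only reliable way to pick the count is to benchmark, which is what the 14-thread figure already reflects.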

pibbuR running at millions of cerebral parallel threads.

PS. I have considered a more general GPU library like SYCL, which supports both NVIDIA and AMD cards, and even Intel (I think), but apparently SYCL still doesn't work well on Windows. DS.

PPS. I have considered developing my hobby programs for both Windows and Linux. I fear there may be a lot of work in porting it now, even though I have tried to make it as general as I can. I guess I should have developed it for both in parallel from the start. DS.
 
14 threads seems to be optimal. More than that it runs slower.
Is it fixed, or are the tasks dynamically distributed to the next available thread? I know it's a little more complex to program if you have to do it yourself, but it pays off when another process suddenly steals a few cores (an OS update, Windows reporting all the details of what you're doing to MS, etc.).
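To make the distinction concrete, here's a hedged C++ sketch of the dynamic version (not the code being discussed): the workers pull the next unprocessed row from a shared atomic counter, so a thread that loses its core for a while simply ends up doing fewer rows instead of holding up its fixed share. compute_row() is a placeholder for the real per-row Mandelbrot work.

C++:
// Sketch of dynamic row distribution: each worker pulls the next
// unprocessed row from a shared atomic counter.
#include <atomic>
#include <thread>
#include <vector>

// Placeholder for the real per-row Mandelbrot computation.
void compute_row(int /*y*/) {}

void render_dynamic(int height, unsigned workers) {
    std::atomic<int> next_row{0};
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&] {
            for (;;) {
                int y = next_row.fetch_add(1);   // claim the next free row
                if (y >= height) break;          // all rows handed out
                compute_row(y);
            }
        });
    }
    for (auto& th : pool) th.join();
}

int main() { render_dynamic(800, 14); }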

In many languages, the user has little information about the maximum number of threads, so it's not always easy to manage. In Rust, I have to add an external library that does some wizardry to find out the value, or another one that calculates the number of available threads, assuming other things may be running. Fortunately, there's yet another library that handles all that automatically and provides a nice API to parallelize things; you can feed it a vector of tasks and it'll schedule them dynamically. It gives something like this, where the Mandelbrot computation is parallelized at the line level:

C++:
// rayon's parallel iterators need the prelude in scope
use rayon::prelude::*;

// bounds = (width, height) in pixels
let mut pixels = vec![0; bounds.0 * bounds.1];

// Split the buffer into one mutable band (row) per image line.
let bands: Vec<(usize, &mut [u8])> = pixels
    .chunks_mut(bounds.0)
    .enumerate()
    .collect();

// The bands are scheduled dynamically across the worker-thread pool.
bands.into_par_iter()
    .for_each(|(i, band)| {
        let top = i;
        let band_bounds = (bounds.0, 1);
        let band_upper_left = pixel_to_point(bounds, (0, top), upper_left, lower_right);
        let band_lower_right = pixel_to_point(bounds, (bounds.0, top + 1), upper_left, lower_right);
        render(band, band_bounds, band_upper_left, band_lower_right);
    });
(I had to mark the block as C++ because there's no Rust syntax highlighting here.)

Without the parallelization it's the plain into_iter instead of into_par_iter, so the difference is minimal. Perhaps there's an equivalent of that lib for C++?
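There is at least a rough standard counterpart since C++17: the parallel algorithms with an execution policy. Below is a hedged sketch of the same per-row split; render_row() is a hypothetical stand-in for the real renderer, and note that with GCC/libstdc++ the parallel policies are backed by TBB (so it must be installed and linked, e.g. -ltbb), while MSVC ships its own implementation.

C++:
// Rough C++17 counterpart of the rayon snippet above, using the standard
// parallel algorithms. render_row() is a hypothetical per-row renderer.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <execution>
#include <numeric>
#include <utility>
#include <vector>

// Placeholder: fill one row of 'width' pixels for image line 'y'.
void render_row(std::uint8_t* band, std::size_t width, std::size_t y) {
    for (std::size_t x = 0; x < width; ++x)
        band[x] = static_cast<std::uint8_t>((x + y) & 0xFF);
}

void render_all(std::vector<std::uint8_t>& pixels,
                std::pair<std::size_t, std::size_t> bounds) {
    std::vector<std::size_t> rows(bounds.second);
    std::iota(rows.begin(), rows.end(), std::size_t{0});   // 0, 1, ..., height-1
    std::for_each(std::execution::par, rows.begin(), rows.end(),
                  [&](std::size_t y) {
                      // Each row is an independent band, like the chunks_mut() split.
                      render_row(pixels.data() + y * bounds.first, bounds.first, y);
                  });
}

int main() {
    std::pair<std::size_t, std::size_t> bounds{800, 800};   // width, height
    std::vector<std::uint8_t> pixels(bounds.first * bounds.second);
    render_all(pixels, bounds);
}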

I have no idea how scheduling works on a GPU.
 
I start all the threads at the same time. I began programming as soon as I had learnt just enough to do it, so I'm pretty sure there are useful things I don't know yet. At the moment the design is very simple: the first thread processes rows 1, 15, 29, ..., the second thread takes care of rows 2, 16, 30, ..., and so on. All memory is allocated before entering the threads.
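In C++ that static, interleaved split looks roughly like this (an illustration of the scheme described, not the actual project code); compute_row() is again a placeholder for the per-row work:

C++:
// Sketch of the static, interleaved split described above: worker t handles
// rows t, t+N, t+2N, ... so no two threads ever touch the same row.
#include <thread>
#include <vector>

// Placeholder for the real per-row Mandelbrot computation.
void compute_row(int /*y*/) {}

void render_interleaved(int height, int num_threads) {
    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t) {
        pool.emplace_back([=] {
            for (int y = t; y < height; y += num_threads)
                compute_row(y);
        });
    }
    for (auto& th : pool) th.join();
}

int main() { render_interleaved(800, 14); }
Interleaving the rows (rather than giving each thread a contiguous block) already spreads the expensive regions of the set fairly evenly across the workers; the remaining weakness is that a worker that loses its core to another process still owns a fixed 1/N of the rows.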

BTW, the C++ STL also has some support for parallel programming; I haven't looked at that yet.

larsg
 
I've done a lot of thread programming in my day, starting back in the late 90's. The last one had a configurable number of threads, and you could change the number in a configuration file (I guess that is why it was configurable ;) )
-
However, this reminds me of an interesting code review I did in the early 2000's, approx. 2003. It was threaded Python code, and he had one thread reading a variable and another thread modifying it. So I asked him why he didn't use a lock around the variable that was changing. He said it wasn't needed because the chance the variable would change while being accessed was very low.

Now, he was both right and wrong. It turns out that in those days Python threads were simulated rather than real, so at the time he wrote the code it was technically wrong but pragmatically right. However, at some point they changed the Python library to use real threads ....

It was very difficult to get him to change his code at the time of the code review because he was very insistent that race conditions couldn't occur AND he was a senior senior coder.... I was glad when he left the company (actually he was a really nice guy, he just knew jack shit).

We (not I, but the company) did run into a problem when the Linux kernel was updated to use real kernel threads instead of simulated ones, and a lot of software suddenly broke.
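For reference, the fix the review asked for is tiny; here's a minimal C++ rendition (the story was Python, and these names are just for illustration) of the guarded read/write:

C++:
// Minimal C++ rendition of the fix the review asked for: every access to a
// variable shared between threads goes through the same mutex.
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

int shared_value = 0;
std::mutex shared_value_mutex;

void writer() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(shared_value_mutex);
        ++shared_value;                  // protected write
    }
}

void reader(long long& sum) {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(shared_value_mutex);
        sum += shared_value;             // protected read
    }
}

int main() {
    long long sum = 0;
    std::thread w(writer), r(reader, std::ref(sum));
    w.join();
    r.join();
    std::cout << shared_value << " " << sum << "\n";
}
For a single integer, std::atomic<int> would do the same job without a mutex.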
--
As for my little program, it only had 100 threads - no big deal - if one is done correctly they are all done correctly - the beauty of threads ;)
 
He said it wasn't needed because the chance the variable would change while being accessed was very low.
Scary to hear things like that, especially if the guy doesn't really acknowledge your remarks.

That being said, I made the same mistake, although it was the equivalent in microelectronics, where race conditions are much subtler when there are several clock domains. And of course, you can only see that once the chip is made (and even then, you may only see it once in a blue moon, and wonder what just happened).

Added to the multithreading, there's also the cool concept of coroutines. Programming with both is made so much easier with a language like Kotlin; I would have hated to do the same Android apps with Java.

I did only a little of multithreading in Python, like a mini webserver providing an interface to interact with batch servers, but that's where I think the language isn't at its best.
 
Scary to hear things like that, especially if the guy doesn't really acknowledge your remarks.

That being said, I made the same mistake, although it was the equivalent in microelectronics, where race conditions are much subtler when there are several clock domains. And of course, you can only see that once the chip is made (and even then, you may only see it once in a blue moon, and wonder what just happened).

Added to the multithreading, there's also the cool concept of coroutines. Programming with both is made so much easier with a language like Kotlin; I would have hated to do the same Android apps with Java.

I did only a little of multithreading in Python, like a mini webserver providing an interface to interact with batch servers, but that's where I think the language isn't at its best.
My problem was the response AFTER the issue was found. We all make mistakes - I've made quite a few over 30 years of programming. One took me 5 years to find (it was in an obscure chip used in a Victor 9000; it was actually a very nice chip in that it had buffering on the comm port - something the IBM 8088 PC lacked).
-
This reminds me of another crappy situation where I found (again during code review) a misuse of a pointer in some code a person had written. It was so serious that I told their manager to make sure they fixed the issue. Well, the code went into production and, behold, they did not fix it and it caused major issues. Again, something that was totally preventable: basically they copied complex code from someone who used a double indirection (** in C++ syntax), but they did not have a pointer to a pointer, just a normal pointer. All they had to do was remove the one '*'. Stupid shit. They quit 3 months later and went to Apple (a company notorious for writing bad code - imho).
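A minimal illustration of that kind of mistake (made-up names, not the code from that review):

C++:
// Minimal illustration of the mistake described: a routine written for a
// pointer-to-pointer (int**) being fed a plain pointer (int*).
#include <iostream>

// Written against int** because the code it was copied from needed it.
void show(int** pp) {
    std::cout << **pp << "\n";   // dereferences twice
}

int main() {
    int value = 42;
    int* p = &value;

    // Wrong: p is not an int**. Forcing it through would dereference the
    // int's bytes as if they were an address - undefined behaviour.
    // show(reinterpret_cast<int**>(p));

    // What the fix amounted to: drop one level of indirection,
    // or pass the address of the pointer instead.
    std::cout << *p << "\n";
    show(&p);
}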
 
Well, that's what I meant by 'acknowledge your remarks'. Perhaps it wasn't clear.
No, you said it fine; I understood, but it still boils my blood that people do such stupid things. In 20 years of reviewing code I saw some of the dumbest things from people the company considered senior coders.

On the other side of the coin, I pointed out some relatively minor issues to a good programmer, meant more as information about how the systems interact, but they then spent the next two weeks looking for solutions.
 
No, you said it fine; I understood, but it still boils my blood that people do such stupid things. In 20 years of reviewing code I saw some of the dumbest things from people the company considered senior coders.

On the other side of the coin, I pointed out some relatively minor issues to a good programmer, meant more as information about how the systems interact, but they then spent the next two weeks looking for solutions.
Yes, I can understand how it's unnerving. I had a few encounters like that, but fortunately, not too many.

There are also fundamentally different approaches, like top-down planning, Agile, or TDD, so it's not always easy for some programmers to understand what is and isn't important when they develop a piece of code. That's where reviews should normally help.
 
I wasn't sure whether to put this here or in the Linux thread. It's a nice little (30') talk with Linus Torvalds and Dirk Hohndel. They talk about the real-time Linux finally making it to the kernel, long-time open-source development and 'old' maintainers, the current Rust vs C polemic in the kernel community, finding the next big open-source project, etc.

https://www.youtube.com/watch?v=OM_8UOPFpqE

PS: Talking about the Rust vs C issues,
Linus: It reminds me of when I was young and people were arguing about vi versus Emacs
Hohndel: They still are!
 
Unix hackers write C code with cat>
Unix gurus write assembly code with cat>
Unix wizards write device drivers with cat>

The complete hierarchy: https://www.levenez.com/unix/guru.html

pibbuR who assumed this also applies to Linux and refuses to disclose his level.
Bah, amateurs. I used Minix before Linux even existed...
 
Unix hackers write C code with cat>
Unix gurus write assembly code with cat>
Unix wizards write device drivers with cat>

The complete hierarchy: https://www.levenez.com/unix/guru.html

pibbuR who assumed this also applies to Linux and refuses to disclose his level.
What about assembly coders who wrote code with type because cat didn't exist yet?
 