I am sure he just posted a wrong value on that site. If we scale by how many CUDA cores they have compared to the GTX 970, it would be around 680 MH/s for the GTX 960 and around 500 MH/s for the GTX 950. But that is just theory. The GTX 750 Ti gives about 450 MH/s according to some forum posts.
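That core-count scaling is easy to sketch. The CUDA core counts below are NVIDIA's published specs, but the 970 baseline hashrate is just an assumed figure for illustration, not a measured value:

```python
# Back-of-the-envelope: scale a known GTX 970 hashrate by CUDA core count.
# Core counts are NVIDIA's published specs; the baseline MH/s is assumed.
CUDA_CORES = {"GTX 970": 1664, "GTX 960": 1024, "GTX 950": 768}

def estimated_mhs(card, baseline_card="GTX 970", baseline_mhs=1100.0):
    """Naive estimate: hashrate proportional to CUDA core count only."""
    return baseline_mhs * CUDA_CORES[card] / CUDA_CORES[baseline_card]

print(round(estimated_mhs("GTX 960")))  # ~680 MH/s with an 1100 MH/s 970
print(round(estimated_mhs("GTX 950")))  # ~510 MH/s
```

Of course this ignores clock speeds and memory bandwidth, which is exactly why the 750 Ti lands well above what a pure core-count estimate would predict.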
Hmm, the 750 Ti is an oddball. It's a first-generation Maxwell with a wimpy memory bus. I would have guessed about 20-25% of the performance of a 980, significantly less than 450. Talking about cgminer here, not ccminer.
Well, it is a Maxwell anyway (the GTX 750 Ti, that is; the GTX 750 is not a Maxwell), and a good thing about it is that it almost never exceeds its TDP of only 75 W (many of them don't even have a single 6-pin power connector).
It is the most powerful NVIDIA card without an auxiliary power connector. And I get 455 to 465 MH/s depending on the wind. Intensity 12; -I 14 did not give me more.
I am running two machines with two GTX 970s each, and one older machine with a single GTX 960. The 970s show about 1.24 GH/s per card. The 960 shows about 734 MH/s. I've been using both the Suprnova and Coinmine pools; Coinmine has been showing fewer rejects, this evening anyway. Still experimenting with what works best.
I haven't rebased on the latest commits, but here's what I've got for hashrates (NSFW): https://ottrbutt.com/miner/decredwolf-02032016-2.png What's more useful is the power savings I get on my FX-8370. The stock miner from the Decred GitHub eats 350% to 450% CPU (or 3.5 to 4.5 cores spinning constantly), while mine does slightly better hash and uses 110% to 120% CPU (or 1.1 to 1.2 cores spinning full-time). Test CPU is an FX-8370 at stock clocks.
I am getting 1.265 GH/s on a 970 and 466 MH/s on a 750 Ti with tpruvot's ccminer. Guys, please donate to him for his work. You can find his donation address on the miner download page. https://github.com/tpruvot/ccminer/releases/tag/1.7.2-tpruvot
Hi, I want to make my 9800GTX+ run a little faster. When I use --gpu-engine 800 --gpu-memclock 1100 it does not give me an error, but I can see in GPU-Z that it's still running at the lower speeds... The same options work fine with my AMD card, same cgminer. Any ideas how to overclock an NVIDIA card?
Can anyone tell me what the best practice is for setting intensity in cgminer (-I 14, for example)? I have a 5850 card and I use intensity 14, which seems to be the max in cgminer. Is there any reason to use a lower intensity, and why? Will a lower intensity result in better speed? I don't really understand the intensity setting. Any advice on this would be appreciated; I'm trying to squeeze every extra kH/s out of my GPU.
If it's your display card too, lowering intensity will help prevent interference with your display. I think it can also help if you are running hot.
I use "d" for dynamic instead of a number, and cgminer will decide what is best. This helps with what @chappjc mentioned; mostly I think "d" helps if you are using the GPU for display as well. I'm thinking of playing around with the intensity just to see. Watch your HW (hardware errors) if you set it higher: setting it too high can burn out your graphics card and will cause more rejects. Before playing with intensity, I would suggest getting rejects down to 0. Check out this page on using cgminer for Doge (the same idea applies here):

"Intensity: a value between 1-20, this is a general setting that makes our graphics cards work harder. Putting it on 20 will put your cards into high speed. Intensity is usually the last parameter that you will set, after you tweak everything else. Changing the file to this alone will make your hash rate go way up:

setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
cgminer.exe -I 19

Where -I 19 is for intensity. ... You can see the hash went up from 20 kH/s to almost 400 kH/s. But you can also see that at the top there is "HW:2". This is bad; it means we set the intensity too high. We can try lowering it until we have no more HW errors. So, intensity is the clear-cut choice to get fast results in your mining speed. But there are other parameters that you may want to try out first."
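For context on why each intensity step matters so much: in cgminer, an intensity of I corresponds to a GPU scan size of 2^I work items per kernel launch, so every +1 roughly doubles the batch the card has to chew through. A small sketch of that mapping (the specific intensities shown are just examples):

```python
# cgminer's intensity I maps to 2**I work items per GPU kernel launch,
# which is why one step up can push a card from fine to HW errors.
def work_items(intensity):
    return 1 << intensity  # same as 2 ** intensity

for i in (12, 14, 19, 20):
    print(i, work_items(i))
# e.g. intensity 14 -> 16384 items; intensity 20 -> 1048576 items
```

This is also why "dynamic" exists: the miner can shrink the batch when the desktop needs the GPU, instead of pinning it at one huge fixed size.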
Big thanks to Epsylon3, xCore, Davec, and everyone who helped me get running. I've learned a shitload from you peeps. The Windows CUDA ccminer is up on suprnova.cc, 2x NVIDIA Tesla averaging 800 MH/s for both.
miner: https://github.com/tpruvot/ccminer/releases/tag/1.7.2-tpruvot
pool: https://dcr.suprnova.cc/index.php?page=gettingstarted
Thanks for that. I did mine Dogecoin before, and I was able to set a higher intensity back then (same GPU as I'm using now). The cgminer v004 (Decred build from GitHub) won't allow a value higher than 14. Is this because of the way it is compiled? I don't get any HW errors with -I 14, and I don't use the PC for anything else but mining. The display is very responsive, so I'm sure it could handle an even higher intensity. I've maxed out all the settings and it is still running at 68 Celsius. So I'm thinking cgminer is holding back performance; maybe I could have a better hashrate if intensity were allowed to be set higher.
I doubt it. Since the version of CGMiner they based it on is stupid, your REAL intensity when you choose 14 is 29.
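If that offset is right (the 14 → 29 shift is this poster's claim about the Decred cgminer fork, which I haven't verified against the source), the arithmetic works out like this:

```python
# Poster's claim: this cgminer fork applies a hidden offset, so a
# requested intensity of 14 actually runs at an effective 29.
OFFSET = 29 - 14  # implied shift of 15

def effective_intensity(requested, offset=OFFSET):
    return requested + offset

def work_items(intensity):
    return 1 << intensity  # 2 ** intensity work items per launch

print(effective_intensity(14))               # 29
print(work_items(effective_intensity(14)))   # 536870912 work items
```

That would explain why the fork caps the flag at 14: half a billion work items per launch is already an enormous batch, and raising the visible number further would be pointless.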
So with this ccminer, NVIDIA cards mine with CPU usage close to 0%? Are you on Windows or Linux? That's very interesting. On cgminer we have quite a CPU load. I hope we'll get an improved version soon that lowers CPU usage when we're only GPU mining.
I dropped it to 1/4th of what it was on mine. I could remove it almost entirely, but this would require editing code that the official Decred devs are likely to change (for other reasons) and as such, I don't want the headache of merging.
Both. There must be a significant component of the algorithm in cgminer that runs on the host only. Either that or this is just part of how OpenCL operates in practice. I'm only familiar with CUDA programming.