Question on a BIOS PCI timing setting (performance related)
I seem to recall some issues with VIA chipset boards, more so than any others, when you adjusted these settings from the default.
My thinking is you should check out some of the tweak guides out there for your motherboard/OS combo to see if there is any benefit from doing this.
You could end up with something really horrific, or nothing may appear to have changed at all...
Responses to this topic
alec
A higher setting is generally better.
I had to set mine pretty high to get my video editing stuff to work.
The idea is that one device can hold the bus for longer with a higher setting.
The downside to a higher setting is that it will in turn increase the time it takes for another card to get the bus, and even if that other card doesn't need it for that long, I believe it will hold it that long anyway.
For you I would go somewhere right in the middle, like 64.
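To put some rough numbers on that, here's a quick back-of-the-envelope sketch, assuming a plain 33 MHz PCI bus (the settings 32, 64, 128 and 248 are just illustrative values):

```python
# Back-of-the-envelope: how long one card can keep the bus at a
# given latency timer setting. Assumes a standard 33 MHz PCI clock.
PCI_CLOCK_HZ = 33_000_000

def hold_time_us(latency_clocks):
    """Convert a latency timer value (in PCI clocks) to microseconds."""
    return latency_clocks / PCI_CLOCK_HZ * 1_000_000

for setting in (32, 64, 128, 248):
    print(f"latency timer {setting:3d} clocks -> ~{hold_time_us(setting):.2f} us on the bus")

# latency timer  32 clocks -> ~0.97 us on the bus
# latency timer  64 clocks -> ~1.94 us on the bus
# latency timer 128 clocks -> ~3.88 us on the bus
# latency timer 248 clocks -> ~7.52 us on the bus
```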
-Jeff
Well alec, I am not a device driver programmer, so I don't know this stuff for sure either. You really need to ask Jeh at 2cpu.com; he can sort you out.
Anyway, here is what I remember/know.
I read somewhere, sometime, that the latency time you set is how long the card can hold the bus, not how long the card will hold the bus. If this is true, then everyone should have the latency as high as possible. I think the real reason the setting is there is not for performance but for compatibility, i.e. as someone mentioned earlier, VIA boards have problems with PCI cards which may sometimes be resolved by messing with this setting. VIA makes total shit, so this makes sense. As I have an i840 chipset and multiple PCI buses on my board, this setting really doesn't amount to much on my system unless you set it too low. If you do that, you are then causing bottlenecks, I think, because you limit how much each card can do at one time; therefore each card has to do a little bit, then wait, then do some more, and so on. I don't really have any issues with that here because I have the high bandwidth cards on the separate 64-bit bus.
Basically all I have is theory, just like you, but if my theory is right then you want to have it set high, maybe even the highest possible.
Then again, it depends what you are doing. If it is lots of little requests, then you want a medium-low setting; things like a server with lots of random file access, perhaps. If you have large sustained transfers, then you want a high setting, like video capture.
This is pretty much why I recommended something in the middle. People like you and me are multitaskers. I run every application I use in a day all the time.
I don't really do much video editing anymore, but setting the latency high did make an improvement in the sustained transfer back then.
Once again, I believe things I have read stated that it was how long a card could hold the bus, not that it would hold it that long for every request.
I do web application programming and SQL databases, which couldn't be further from device driver programming. The closest thing I do to even memory management is designing tables in MS SQL so that certain fields will line up on certain data pages.
So basically I am all theory here, but I do have some experience messing with this stuff.
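If you're on a Linux box and want to see what actually got programmed rather than trusting the BIOS screen, something like this rough sketch should do it. It reads the latency timer byte (offset 0x0D of the standard PCI config header) through sysfs; the paths are Linux-specific and reading config space may need root on some setups, so treat it as an illustration, not gospel.

```python
# List the PCI latency timer for each device via Linux sysfs.
# Offset 0x0D of the standard PCI configuration header is the
# Latency Timer register, counted in PCI clocks.
import glob

LATENCY_TIMER_OFFSET = 0x0D

for cfg_path in sorted(glob.glob("/sys/bus/pci/devices/*/config")):
    with open(cfg_path, "rb") as cfg:
        header = cfg.read(64)            # standard header is 64 bytes
    if len(header) <= LATENCY_TIMER_OFFSET:
        continue                         # skip anything we can't read fully
    dev_addr = cfg_path.split("/")[-2]   # e.g. "0000:00:1f.0"
    print(f"{dev_addr}: latency timer = {header[LATENCY_TIMER_OFFSET]} clocks")
```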
-Jeff
Well, I don't do device driver programming, but I didn't say what I do is easy either.
It is incredibly complicated.
As to the real question at hand: does each device hold the bus for the specified latency?
Anyone know?
For me those links answered the question:
the card will hold the bus for the specified number of clock cycles.
From here on out it comes down to set and bench until you find what works best for you.
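If you want a quick and dirty number for the "set and bench" part, something like this crude read-speed timer is about all I'd bother with between BIOS changes. It won't isolate the PCI latency effect by itself (caching and everything else get in the way), and the file name is just a placeholder, so take the result as a rough before/after comparison only.

```python
# Crude "set and bench" helper: time a large sequential read and
# report MB/s. Run it before and after changing the BIOS setting.
# Caveat: OS caching skews repeat runs, so use a file bigger than
# RAM or a freshly copied one each time.
import time

TEST_FILE = "bigfile.bin"   # placeholder: any multi-hundred-MB file
CHUNK = 1024 * 1024         # read in 1 MB chunks

start = time.perf_counter()
total = 0
with open(TEST_FILE, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start

print(f"read {total / 1e6:.0f} MB in {elapsed:.2f} s -> {total / 1e6 / elapsed:.1f} MB/s")
```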
I say 32 is good; 64 is good also.
I like 64 because it is right in the middle.
Seems like with 64 you can get that good sustained transfer needed for video capture/large file copies, but you also get the bus to switch to another task pretty quickly.
64 clock cycles at 33 MHz happens pretty friggin' quick, so that is why you don't see a real humanly noticeable difference when you run your system.
I think that the bus latency matters most for sustained-transfer-dependent stuff (you have to open up a big enough window to get the work done before interruption).
Also, I think that the bus is generally pretty saturated and most cards these days do a lot at once, so your best bet is probably the middle of the road at 64.
That is what I have and it runs great.
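And for the "pretty friggin' quick" part, here is the worst-case arithmetic, assuming a 33 MHz clock and every other bus master burning its whole allowance back to back (which is about as bad as it gets):

```python
# Worst-case wait before a card gets its turn: every other bus
# master uses its full latency allowance back to back.
# Assumes a 33 MHz PCI clock and all masters set to the same value.
PCI_CLOCK_HZ = 33_000_000

def worst_case_wait_us(latency_clocks, other_masters):
    return other_masters * latency_clocks / PCI_CLOCK_HZ * 1_000_000

for latency in (32, 64, 248):
    wait = worst_case_wait_us(latency, other_masters=4)
    print(f"latency {latency:3d}, 4 other busy masters -> wait up to ~{wait:.1f} us")

# latency  32, 4 other busy masters -> wait up to ~3.9 us
# latency  64, 4 other busy masters -> wait up to ~7.8 us
# latency 248, 4 other busy masters -> wait up to ~30.1 us
```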
-Jeff
BTW alec, thanks for bringing a new non-boring topic to the forum.
It is discussions like this that used to bring me to these boards; now it seems I come here out of habit.
As you know, it has been a little slow here recently.