dm-crypt performance sucks on my fileserver

I have a fileserver running the x86_64 version of Arch Linux. The boot drive is unencrypted; all data disks use dm-crypt with the aes-xts-benbi cipher and a 512-bit key (256 bits effective, since XTS splits the key), using the aes_x86_64 module.
The encrypted devices sit in two LVM volume groups, "data" and "backup". When copying files between these two, performance is awfully bad: a full rsync of my 1.3 TB of storage took more than a day, with rsync reporting 13 MB/s when it finished.
The server has an Athlon 64 3200+ (single-core Winchester core) running at 2 GHz. While copying between the two volume groups, CPU usage is almost 100%.
I don't think this behaviour is normal. The TrueCrypt benchmark on my 1.86 GHz notebook achieves speeds well beyond the transfer rate of any of my hard disks, and I would expect dm-crypt to be on par with or faster than TrueCrypt's AES implementation.
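For what it's worth, here is roughly how one could check whether the cipher itself is the bottleneck (a sketch: the device and mapping names are placeholders for my setup, and cryptsetup benchmark only exists in cryptsetup 1.6 and later):

# Raw disk read speed, bypassing encryption (placeholder device):
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
# Read speed through the dm-crypt mapping, i.e. with on-the-fly decryption:
dd if=/dev/mapper/crypt-data of=/dev/null bs=1M count=1024 iflag=direct
# Newer cryptsetup can also benchmark the ciphers purely in memory:
cryptsetup benchmark

If the mapped read is much slower than the raw read while one core sits at 100%, the cipher is the bottleneck rather than the disks.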
Can anyone help?

Thanks for the reply.  I am not home so I can't check much of that right now, but I *think* my BIOS is 1.6.  Honestly, I am new to overclocking and I am more than a little nervous to flash the BIOS, as I don't know what I'm doing and I have no money to replace my mobo if I screw up.
Where can I find that information about my CPU?  This is the one I bought (about a month and a half ago):
http://www.newegg.com/Product/Product.asp?Item=N82E16819103537#DetailSpecs
Also, within the BIOS I can only raise the Vcore to 1.45.  Is that because of my BIOS or my PSU?  When I say I can't go above 1.45, I mean there isn't even an option to raise it higher.  Thanks again for the help.
Edit:  I have tried the HTT multiplier at 2x and 3x, but I don't know if I have tried 2.5x.  Can 2.5x make that much of a difference over the others?

Similar Messages

  • Oracle VM performance issues on OVS 2.2 - java 64bit (jre 1.6) applications

    In an effort to move to a private cloud, and thanks to Oracle's database licensing policy on VMs, we have decided to use Oracle Virtual Server as the hypervisor, OEL5+ as the database OS, and 11gR2 Grid single-node RAC with ASM as the database. Our application can be on RHEL5+ or OEL5+; it doesn't matter.
    Here is my configuration (at present all components run on a single physical server):
    1. OVS 2.2 on a BL465 G6 with 32 GB memory; the primary storage repository is on Fibre Channel from a 3PAR array.
    2. 40 GB individual LUNs presented as physical disks to all application servers. (I was previously using shared virtual disks and image files, first with the file: driver and then the blktap driver; performance sucks for both, with file: being worst.) I have tried OEL5+ templates, with a RHEL5+ paravirtualized OS as the current one.
    3. On the database side, the ASM disks are physical as well (just converted). Overall, the only file-based images off the repository are one of the middleware servers (MQ), the load balancer (Zeus), and the routing software (Vyatta).
    We are running a LoadRunner test; here are the results.
    Perf App: physical Linux box running the application
    Perf DB: physical Solaris box running the DB
    Cloud App: RHEL5 paravirtualized VM on a phy: disk
    Cloud DB: Oracle 11gR2 single-node RAC, Grid installation; ASM disks are physical, OCR/vote disks are physical
    Perf App with Perf DB:   140.08 s
    Cloud App with Perf DB:  219.432 s
    Cloud App with Cloud DB: 226.476 s
    Also, I have noticed that during the test only 1 vCPU is utilized; the rest stay idle.
    I have run several disk benchmarking tests to determine that a phy: disk is the best option for reads and writes. I have also run tcpdump, and removed the load balancer and routing software from the equation by creating a flat single-subnet network between the app and DB (both running on the same physical server). Tcpdump shows little fragmentation or retransmission, and a pattern similar to what we see in the physical environment. The application, Java, and JRE versions are the same on the VM and the physical server; the only difference is the DB (in the physical world it runs on Solaris, here on an OEL VM).
    Note: on the OEL templates NUMA was turned off; turning it ON yielded an overall performance gain of 30%. Now I need to fine-tune further. So my questions, based on my observations, are:
    1. vCPU calibration: can I pin one physical CPU to one vCPU? How is this scheduling controlled? (See the xm sketch after this post.)
    2. Why does only one vCPU show utilization while all others stay idle? I have 4 to 6 vCPUs per VM.
    3. Is there any tuning to be done at the OVS layer?
    4. Is there any tuning to be done in the VMs? (Note: in our physical environment, apart from hardening the OS and stripping unnecessary RPMs, we modify neither the kernel nor any TCP or memory buffer parameters.)
    Oracle support is spending days trying to figure out my issue; I have uploaded OSWatcher data many times. I could escalate the case, but I have always received better and quicker information from the forum. Any pointers would be helpful.
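    On question 1: OVS 2.2 is Xen-based, so pinning should be possible from the dom0 command line with the xm tool. A sketch, with "myvm" as a placeholder domain name:
    xm vcpu-list myvm    # show the current vCPU-to-physical-CPU mapping
    xm vcpu-pin myvm 0 4 # pin vCPU 0 of myvm to physical CPU 4
    To make pinning persistent, a line like cpus = "4-7" in the domain's vm.cfg restricts its vCPUs to those physical CPUs; by default the Xen credit scheduler migrates vCPUs freely across physical CPUs.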

    What are you missing?
    I inherited this app, and signing the third-party JARs is how it was set up. I was wondering the same thing: why was it necessary to sign the third-party JARs?
    The applet runs in either JRE 1.6.0_13 or JRE 1.6.0_27, depending on the other Java apps the user uses. JRE 1.6.0_13 does not have the mixed-code security check (so it is effectively disabled), but JRE 1.6.0_27 does, and the applet will not launch with mixed-code security enabled, so we have to disable it. With all the hacking going on in the last two years it's important to improve security, so this is a must.
    Yes, I always clear up the cache.
    Any idea on how to resolve this problem?

  • Illustrator CC Performance Issues

    I've been using CC for a while and have noticed the performance issues it has. I use a laptop with 8 GB of RAM, which runs Illustrator fine, but it never seems to use more than about 300 MB of memory when running, so it can lag badly when I use lots of text and vectors. Whenever I click on something, I have to wait 5 seconds before it registers that I've clicked on it, and when I move the object I have to wait another 5 seconds, on top of waiting 10 seconds for it to register that I've zoomed out. I'm studying Graphic Design at university, so this makes it almost impossible to do my work when it's like this. Please don't say anything obvious like close other running software or restart your computer; I've tried all that. I just want to know how to allocate more RAM for it to run on, since there is no Preferences > Performance like in Photoshop, or any other way I could improve performance. Thanks

    Nobody can know. We know nothing about your system (beyond that it has 8 GB of RAM), your documents, your settings, screen resolution, input devices, or anything else. Just saying that performance sucks is not particularly useful, but at the same time it could be perfectly normal within what you can expect from the old crooked lady that is aunt Illy...
    Mylenium

  • GF4MX440-T8X AGP Performance

    MSI web site: http://www.msi.com.tw/program/products/vga/vga/pro_vga_detail.php?UID=370&MODEL=MS-8890
    The specifications list the AGP memory bandwidth as 3.2 GB/s, which is lower than the 4X MX440 (6.4 GB/s) and MX440SE (5.3 GB/s) models.
    Is this a typo?

    I HAVE BEEN HAD!  X(  X(  X(  No wonder this card's performance sucks; they amputated its legs at the knees!
    Here is the response I received from MSI Technical Support regarding the accuracy of the specs for an 8890:
    Dear Sir/Madam,
    Thank you for your inquiry.
    The MS-8890 has a 64-bit memory interface and its memory clock is 200 MHz DDR (200*2).
    So the memory bandwidth is 200*2*64/8 = 3.2 GB/s.
    Please feel free to let us know if you have any further problems.
    So there it is: a 128-bit interface at the same clock would give 200*2*128/8 = 6.4 GB/s, which is exactly the regular MX440's number; this card has half the memory bus.

  • Problem with Docking Station Type 2504 and R60 LAN Performance

    Hello,
    we have a small problem with a Lenovo docking station type 2504 and an R60 laptop.
    We ran some performance tests against our fileserver.
    We found that when I use the docking station, I get poor performance.
    If I use the R60 without the docking station, LAN performance is really good.
    Any idea why?
    How can that be?
    The performance profile is set to max.
    The R60 runs on battery; the docking station has its own power supply.
    Big thanks

    Ok, so I was able to video chat with Defcom last night by turning on my firewall and also by changing iChat/AIM to logon through port 334. I still cannot video chat with appleutest01. Any other ideas?

  • Bad query plan for a self-referencing CTE view query with a variable in the WHERE clause. Is there a way out, or is this a SQL Server defect?

    Please help. Thank you for your time and expertise.
    Prerequisites: the SQL query needs to be a view. The real view is more than the recursion shown here; it computes a location path, is used in JOINs, and returns this path.
    Problem: no matter what I tried, SQL Server does not produce an index seek when the predicate uses a variable, but it does with a literal.
    See the full reproduction code below.
    I expect the query SELECT lcCode FROM dbo.vwLocationCodes l WHERE l.lcID = @lcID to seek the UNIQUE index, but it does not.
    I tried these:
    1. Changing the UX and/or PK to be CLUSTERED.
    2. Query OPTION(RECOMPILE).
    3. FORCESEEK on the view.
    4. SQL Server 2012/2014.
    5. Wrapping it into a function and using CROSS APPLY (on a large number of outer rows this just dies, so it's no solution).
    All to no avail. This smells like a bug in SQL Server; I am seeking your confirmation.
    I am thinking it is a bug because the variable's cardinality is 1 and the query is against a unique key. This should produce a single seek (plus a key lookup if the unique index is nonclustered).
    Thanks
    Vladimir
    use tempdb
    BEGIN TRAN
    -- setup definition
    CREATE TABLE dbo.LocationHierarchy(
    lcID int NOT NULL ,
    lcHID hierarchyid NOT NULL,
    lcCode nvarchar(25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    lcHIDParent AS lcHID.GetAncestor(1) PERSISTED,
    CONSTRAINT PK_LocationHierarchy_lcID PRIMARY KEY NONCLUSTERED (lcID ASC),
    CONSTRAINT UX_LocationHierarchy_pltID_lcHID UNIQUE CLUSTERED (lcHID ASC)
    )
    -- add some data
    INSERT INTO dbo.LocationHierarchy (lcID, lcHID, lcCode) -- column list needed: lcHIDParent is computed
    VALUES
    (1, '/', 'A')
    ,(2, '/1/', 'B')
    ,(3, '/1/1/', 'C')
    ,(4, '/1/1/1/', 'D')
    --DROP VIEW dbo.vwLocationCodes
    GO
    CREATE VIEW dbo.vwLocationCodes
    AS
    WITH ru AS
    (
    SELECT
    lh.lcID
    ,lh.lcCode
    ,lh.lcHID
    ,CAST('/' + lh.lcCode + '/' as varchar(8000)) as LocationPath
    -- to support recursion
    ,lh.lcHIDParent
    FROM dbo.LocationHierarchy lh
    UNION ALL
    SELECT
    ru.lcID
    ,ru.lcCode
    ,ru.lcHID
    ,CAST('/' + lh.lcCode + ru.LocationPath as varchar(8000)) as LocationPath
    ,lh.lcHIDParent
    FROM dbo.LocationHierarchy lh
    JOIN ru ON ru.lcHIDParent = lh.lcHID
    )
    SELECT
    lh.lcID
    ,lh.lcCode
    ,lh.LocationPath
    ,lh.lcHID
    FROM ru lh
    WHERE lh.lcHIDParent IS NULL
    GO
    -- get data via view
    SELECT
    CONCAT(SPACE(l.lcHID.GetLevel() * 4), lcCode) as LocationIndented
    FROM dbo.vwLocationCodes l
    ORDER BY lcHID
    GO
    SET SHOWPLAN_XML ON
    GO
    DECLARE @lcID int = 2
    -- I believe this produces a bad plan and is a defect in the SQL Server optimizer.
    -- The variable's cardinality is 1 and SQL Server should know that. The optimal plan is an index seek with a key lookup.
    -- This does not happen.
    SELECT lcCode FROM dbo.vwLocationCodes l WHERE l.lcID = @lcID -- bad plan
    -- this is a plan I expect.
    SELECT lcCode FROM dbo.vwLocationCodes l WHERE l.lcID = 2 -- good plan
    -- I reviewed these but I need a view here, can't be SP
    -- http://sqlblogcasts.com/blogs/tonyrogerson/archive/2008/05/17/non-recursive-common-table-expressions-performance-sucks-1-cte-self-join-cte-sub-query-inline-expansion.aspx
    -- http://social.msdn.microsoft.com/Forums/sqlserver/en-US/22d2d580-0ff8-4a9b-b0d0-e6a8345062df/issue-with-select-using-a-recursive-cte-and-parameterizing-the-query?forum=transactsql
    GO
    SET SHOWPLAN_XML OFF
    GO
    ROLLBACK
    Vladimir Moldovanenko

    Here is more... note that I am creating a table Items, and these items can be in Locations.
    I am trying LEFT JOIN and OUTER APPLY to 'bend' the query into a NESTED LOOP with a SEEK. There has to be a nested loop, 2 rows against 4, but SQL Server fails to generate an optimal plan with a SEEK. Even RECOMPILE does not help.
    use tempdb
    BEGIN TRAN
    -- setup definition
    CREATE TABLE dbo.LocationHierarchy(
    lcID int NOT NULL ,
    lcHID hierarchyid NOT NULL,
    lcCode nvarchar(25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    lcHIDParent AS lcHID.GetAncestor(1) PERSISTED,
    CONSTRAINT PK_LocationHierarchy_lcID PRIMARY KEY NONCLUSTERED (lcID ASC),
    CONSTRAINT UX_LocationHierarchy_pltID_lcHID UNIQUE CLUSTERED (lcHID ASC)
    )
    -- add some data
    INSERT INTO dbo.LocationHierarchy (lcID, lcHID, lcCode) -- column list needed: lcHIDParent is computed
    VALUES
    (1, '/', 'A')
    ,(2, '/1/', 'B')
    ,(3, '/1/1/', 'C')
    ,(4, '/1/1/1/', 'D')
    --DROP VIEW dbo.vwLocationCodes
    GO
    --DECLARE @Count int = 10;
    --WITH L0 AS (SELECT N FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N (N))-- 10 rows
    --,L1 AS (SELECT n1.N FROM L0 n1 CROSS JOIN L0 n2) -- 100 rows
    --,L2 AS (SELECT n1.N FROM L1 n1 CROSS JOIN L1 n2) -- 10,000 rows
    --,L3 AS (SELECT n1.N FROM L2 n1 CROSS JOIN L2 n2) -- 100,000,000 rows
    --,x AS
    -- SELECT TOP (ISNULL(@Count, 0))
    -- ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) as Number
    -- FROM L3 n1
    --SELECT Number as itmID, NTILE(4)OVER(ORDER BY Number) as lcID
    --INTO dbo.Items
    --FROM x
    ----ORDER BY n1.N
    --ALTER TABLE dbo.Items ALTER COLUMN itmID INT NOT NULL
    --ALTER TABLE dbo.Items ADD CONSTRAINT PK PRIMARY KEY CLUSTERED (itmID)
    CREATE TABLE dbo.Items (itmID int NOT NULL PRIMARY KEY, lcID int NOT NULL)
    INSERT INTO dbo.items
    VALUES(1, 1)
    ,(2, 3)
    GO
    CREATE VIEW dbo.vwLocationCodes
    AS
    WITH ru AS
    (
    SELECT
    lh.lcID
    ,lh.lcCode
    ,lh.lcHID
    ,CAST('/' + lh.lcCode + '/' as varchar(8000)) as LocationPath
    -- to support recursion
    ,lh.lcHIDParent
    FROM dbo.LocationHierarchy lh
    UNION ALL
    SELECT
    ru.lcID
    ,ru.lcCode
    ,ru.lcHID
    ,CAST('/' + lh.lcCode + ru.LocationPath as varchar(8000)) as LocationPath
    ,lh.lcHIDParent
    FROM dbo.LocationHierarchy lh
    JOIN ru ON ru.lcHIDParent = lh.lcHID
    )
    SELECT
    lh.lcID
    ,lh.lcCode
    ,lh.LocationPath
    ,lh.lcHID
    FROM ru lh
    WHERE lh.lcHIDParent IS NULL
    GO
    -- get data via view
    SELECT
    CONCAT(SPACE(l.lcHID.GetLevel() * 4), lcCode) as LocationIndented
    FROM dbo.vwLocationCodes l
    ORDER BY lcHID
    GO
    --SET SHOWPLAN_XML ON
    GO
    DECLARE @lcID int = 2
    -- I believe this produces a bad plan and is a defect in the SQL Server optimizer.
    -- The variable's cardinality is 1 and SQL Server should know that. The optimal plan is an index seek with a key lookup.
    -- This does not happen.
    SELECT lcCode FROM dbo.vwLocationCodes l WHERE l.lcID = @lcID -- OPTION(RECOMPILE) -- bad plan
    -- this is a plan I expect.
    SELECT lcCode FROM dbo.vwLocationCodes l WHERE l.lcID = 2 -- good plan
    SELECT *
    FROM dbo.Items itm
    LEFT JOIN dbo.vwLocationCodes l ON l.lcID = itm.lcID
    OPTION(RECOMPILE)
    SELECT *
    FROM dbo.Items itm
    OUTER APPLY
    (
    SELECT *
    FROM dbo.vwLocationCodes l
    WHERE l.lcID = itm.lcID
    ) l
    -- I reviewed these but I need a view here, can't be SP
    -- http://sqlblogcasts.com/blogs/tonyrogerson/archive/2008/05/17/non-recursive-common-table-expressions-performance-sucks-1-cte-self-join-cte-sub-query-inline-expansion.aspx
    -- http://social.msdn.microsoft.com/Forums/sqlserver/en-US/22d2d580-0ff8-4a9b-b0d0-e6a8345062df/issue-with-select-using-a-recursive-cte-and-parameterizing-the-query?forum=transactsql
    GO
    --SET SHOWPLAN_XML OFF
    GO
    ROLLBACK
    Vladimir Moldovanenko

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with building out the data model, and teaching the users how to combine pre-built elements”.
    I had commented “. . . end users and data modelling don't mix . . . self-service so far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart of the Paper is the Reporting Pyramid, which states:  business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets, and other intricate analyses as needed.
    On the subject of self-service, the Redwing view says:  although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence.
    Assuming that data is properly integrated, historically accurate, and non-volatile as well, then I've just described a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe.
    According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views.  I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantage DMG/PQ has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case, choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • [SOLVED] nvidia, missing OpenGL extensions

    I am trying to get Steam to work properly. Some games run, but their performance sucks; others complain about missing extensions and never start. I have a GTX 770 and the nvidia driver installed, so this should not be the case.
    Here is what I have installed:
    $ pacman -Qs nvidia
    local/libcl 1.1-4
    OpenCL library and ICD loader from NVIDIA
    local/libvdpau 1.1-1
    Nvidia VDPAU library
    local/nvidia 346.59-1
    NVIDIA drivers for linux
    local/nvidia-utils 346.59-1
    NVIDIA drivers utilities
    $ pacman -Qs mesa
    local/glu 9.0.0-3
    Mesa OpenGL Utility library
    local/lib32-libtxc_dxtn 1.0.1-5
    S3 Texture Compression (S3TC) library for Mesa (32-bit)
    local/lib32-mesa 10.5.3-1
    an open-source implementation of the OpenGL specification (32-bit)
    local/lib32-mesa-libgl 10.5.3-1
    Mesa 3-D graphics library (32-bit)
    local/libtxc_dxtn 1.0.1-6
    S3 Texture Compression (S3TC) library for Mesa
    local/mesa 10.5.3-1
    an open-source implementation of the OpenGL specification
    local/mesa-demos 8.2.0-4
    Mesa demos and tools
    local/mesa-libgl 10.5.3-1
    Mesa 3-D graphics library
    All new processors have integrated graphics, so I checked which GPU is actually in use.
    $ lspci
    00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
    00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
    00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4)
    00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4)
    00:1c.5 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c4)
    00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Z77 Express Chipset LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 770] (rev a1)
    01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
    04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
    The nvidia driver was properly loaded in X, as I can see in `lspci -k`:
    01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 770] (rev a1)
    Subsystem: ASUSTeK Computer Inc. Device 8465
    Kernel driver in use: nvidia
    Kernel modules: nouveau, nvidia
    I have installed unigine-heaven to get some reliable information about my card's performance. Unfortunately, when I start the benchmark I get
    $ unigine-heaven
    Loading "/opt/unigine-heaven/bin/../data/heaven_4.0.cfg"...
    Loading "libGPUMonitor_x64.so"...
    Loading "libGL.so.1"...
    Loading "libopenal.so.1"...
    Set 1920x1080 fullscreen video mode
    Set 1.00 gamma value
    Unigine engine http://unigine.com/
    Binary: Linux 64bit GCC 4.4.5 Release Feb 13 2013 r11274
    Features: OpenGL OpenAL XPad360 Joystick Flash Editor
    App path: /opt/unigine-heaven/bin/
    Data path: /opt/unigine-heaven/data/
    Save path: /home/lorddidger/.Heaven/
    ---- System ----
    System: Linux 3.19.3-3-ARCH x86_64
    CPU: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz 3410MHz MMX SSE SSE2 SSE3 SSSE3 SSE41 SSE42 AVX HTT x8
    GPU: GeForce GTX 770 PCI Express 346.59 x1
    System memory: 3904 MB
    Video memory: 2048 MB
    Sync threads: 7
    Async threads: 8
    ---- MathLib ----
    Set SSE2 simd processor
    ---- Sound ----
    Renderer: OpenAL Soft
    OpenAL vendor: OpenAL Community
    OpenAL renderer: OpenAL Soft
    OpenAL version: 1.1 ALSOFT 1.16.0
    Found AL_EXT_LINEAR_DISTANCE
    Found AL_EXT_OFFSET
    Found ALC_EXT_EFX
    Found EFX Filter
    Found EFX Reverb
    Found EAX Reverb
    Found QUAD16 format
    Found 51CHN16 format
    Found 61CHN16 format
    Found 71CHN16 format
    Maximum sources: 256
    Maximum effect slots: 4
    Maximum auxiliary sends: 2
    ---- Render ----
    Renderer: NVIDIA NV70 (Kepler) 2048MB
    OpenGL vendor: VMware, Inc.
    OpenGL renderer: Gallium 0.4 on llvmpipe (LLVM 3.6, 256 bits)
    OpenGL version: 3.0 Mesa 10.5.3
    Found required GL_ARB_map_buffer_range
    Found required GL_ARB_vertex_array_object
    Found required GL_ARB_draw_instanced
    Found required GL_ARB_draw_elements_base_vertex
    Found required GL_ARB_transform_feedback
    Found required GL_ARB_half_float_vertex
    Found required GL_ARB_half_float_pixel
    Found required GL_ARB_framebuffer_object
    Found required GL_ARB_texture_multisample
    Found required GL_ARB_uniform_buffer_object
    Unigine fatal error
    GLRender::require_extension(): required extension GL_ARB_geometry_shader4 is not supported
    Shutdown
    I do not know what else I should investigate. I tried lib32-nvidia-libgl instead of Mesa's libGL, but then Steam did not even start.
    Any hint would be greatly appreciated.
    Last edited by crazySocket (2015-04-25 12:10:13)

    I believe I saw in the Steam logs
    libGL error: unable to load driver: swrast_dri.so
    libGL error: failed to load driver: swrast
    but I can see it no longer. It must be because I have this library installed.
    $ find / -mount -name '*swrast*'
    /usr/lib/xorg/modules/dri/kms_swrast_dri.so
    /usr/lib/xorg/modules/dri/swrast_dri.so
    /usr/lib32/xorg/modules/dri/kms_swrast_dri.so
    /usr/lib32/xorg/modules/dri/swrast_dri.so
    You can get it too by installing the proper packages:
    $ pacman -Qo /usr/lib/xorg/modules/dri/swrast_dri.so
    /usr/lib/xorg/modules/dri/swrast_dri.so belongs to mesa 10.5.3-1
    $ pacman -Qo /usr/lib32/xorg/modules/dri/swrast_dri.so
    /usr/lib32/xorg/modules/dri/swrast_dri.so belongs to lib32-mesa 10.5.3-1
    I do not know why your Steam tries to load nouveau. Perhaps you have not installed the packages I mentioned in my previous post; nvidia blacklists nouveau. Make sure you have exactly the same state I have described and you will get Steam, at least, starting up.
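    For the record, the Unigine log above contains the clue: "Gallium 0.4 on llvmpipe" means Mesa's software libGL was answering instead of NVIDIA's. On the Arch packaging of that era, switching the libGL provider looked roughly like this (a sketch; the package split changed in later releases):
    # pacman will prompt to replace mesa-libgl / lib32-mesa-libgl:
    pacman -S nvidia-libgl lib32-nvidia-libgl
    # verify the active renderer afterwards (glxinfo is in mesa-demos):
    glxinfo | grep "OpenGL renderer"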

  • COVER ART DOESN'T SHOW IN COVER FLOW

    Just got the new iPod classic, 80 GB.
    Some of the cover art I synced doesn't show in Cover Flow. Instead of the blank musical-note cover it's just black, meaning it has content, but it just won't show.
    Yet all the cover art shows perfectly in my iTunes.
    Help please.
    Thanks.

    I've had this problem also. This problem usually happens when I update new artwork to the iPod. SOLUTION: Try deleting the entire album from the iPod, then re-transfer the whole album from iTunes with the new artwork. This solution has worked for me every time (so far). Best of luck.
    So many things about the 160GB performance suck ***, but I love the stupid thing. Hoping they fix all the bugs soon!

  • Cover Flow Doesn't show some covers

    When iTunes imports a CD of a popular album, I click on Download Cover Art, and 75% of the time it can't find the cover art.
    I usually can find the cover art easily on www.allmusic.com and transfer it into iTunes.
    When I sync the Nano, I've noticed that a couple of album covers don't show in Cover Flow, although when I go to Artists or Albums they show up there.
    Any ideas on this?
    Thanks

    I've had this problem also. This problem usually happens when I update new artwork to the iPod. SOLUTION: Try deleting the entire album from the iPod, then re-transfer the whole album from iTunes with the new artwork. This solution has worked for me every time (so far). Best of luck.
    So many things about the 160GB performance suck ***, but I love the stupid thing. Hoping they fix all the bugs soon!

  • Do I need to have it set to Game Mode for Alchemy to work?

    I have the dreaded crackling/popping right now, and for whatever reason it goes away when I put my X-Fi into either Entertainment or Audio Creation mode.
    Can I keep the settings in Entertainment mode and still have ALchemy give me EAX when I run ALchemy-supported games?

    Yes, you must be in Gaming mode; you have to run ALchemy and add your games, then switch them to the enabled side. http://img65.imageshack.us/img65/722...2007098su2.gif
    But now my game performance sucks; I had way better fps with my SB Live! 5.

  • Which Java SDK should I download? 1.3.1 or 1.4.1 or 1.4.2

    Hi all,
    new user here...
    I have always wanted to ask this.
    Which one?
    Why is more than one version supported?
    Is 1.3.1_8 compatible with 1.3.1_02?

    The reason I ask is that I am having trouble supporting two or three Java programs that were "compiled" under different releases.
    CiscoWorks only works with the M$ JVM and Sun's JVM 1.3.1 (it needs both) but nothing higher. Another program called 'something' was compiled under 1.2, and its performance sucks because of something different between 1.2 and 1.3.1.
    I wish I could recompile, but that is not an option. I guess I could load three different JVMs, but is that even possible with such a wide variety?
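    Side-by-side JVMs are certainly possible: each Sun release installs into its own directory, so each program can be launched with the exact JVM it needs. A sketch, assuming typical Sun install paths (adjust to wherever your JREs actually live):
    $ /usr/java/j2re1_3_1_08/bin/java -version           # run one app under 1.3.1
    $ /usr/java/j2sdk1.4.2/bin/java -jar someapp.jar     # run another under 1.4.2
    $ JAVA_HOME=/usr/java/j2sdk1.4.2 ./start-someapp.sh  # for launchers that honour JAVA_HOME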
    Thanks for the reply.

  • I can't find an option in Firefox to disable animations on webpages as a general rule.

    My video graphics performance sucks on this PC. I need to find an option in Firefox that, as a general rule, does not allow animations on webpages.
    IE had a long list of settings for how webpages are handled, but I just can't find these options in Firefox.
    Maybe I'm just not familiar enough with Firefox yet.
    Please help?

    hmm your sys info says
    ''Graphics
    adapterDescription: ATI Radeon 3000 Graphics
    adapterDescription2:
    adapterDeviceID: 0x9616
    adapterDeviceID2:
    adapterDrivers: aticfx64 aticfx64 aticfx32 aticfx32 atiumd64 atidxx64 atiumdag atidxx32 atiumdva atiumd6a atitmm64
    adapterDrivers2:
    adapterRAM: 256
    adapterRAM2:
    adapterVendorID: 0x1002
    adapterVendorID2:
    direct2DEnabled: False
    direct2DEnabledMessage: [u'tryNewerDriver', u'10.6']
    directWriteEnabled: False
    directWriteVersion: 6.2.9200.16571
    driverDate: 5-11-2010
    driverDate2:
    driverVersion: 8.733.0.0
    driverVersion2:
    info: {u'AzureCanvasBackend': u'skia', u'AzureFallbackCanvasBackend': u'cairo', u'AzureContentBackend': u'none'}
    isGPU2Active: False
    numAcceleratedWindows: 1
    numTotalWindows: 1
    webglRenderer: Google Inc. -- ANGLE (ATI Radeon 3000 Graphics )
    windowLayerManagerType: Direct3D 9
    You could check for upgrades for your graphics drivers (and DirectX); if that fails, disable hardware acceleration. I'm not sure you can turn all animations off, though for GIFs see the pref below.
    * [[Upgrade your graphics drivers to use hardware acceleration and WebGL]]
    * [[Troubleshoot extensions, themes and hardware acceleration issues to solve common Firefox problems]]
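    One animation type can be switched off outright: the long-standing image.animation_mode preference stops animated GIFs (it does not affect plugin or CSS animations). Set it in about:config, or add this line to user.js in your profile folder:
    user_pref("image.animation_mode", "none"); // "normal" animates, "once" plays one loop, "none" freezes GIFs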

  • OracleConnectionCacheImpl recycle unused connections?

    Hi all,
    Does anyone know if OracleConnectionCacheImpl has any mechanism that recycles unused connections when traffic decreases?
    From my tests, it creates connections fine up to the maximum, or past it depending on the cache scheme used, but it doesn't seem to recycle back down to the minimum.

    My own experience, and what I understand from Metalink, is that there is a known (internal) bug in the 8.1.7 driver that causes setMaxLimit to be a real hard limit even if you use the DYNAMIC scheme:
    i.e. no connections are created above the limit, so there is never anything to reduce back down.
    Discovered after pulling my hair out as to why our app's performance sucked big time... try running 700 users on a pool with 3 connections :-))
    Originally posted by Derek Wichmann: "When you say 'take them down to the cache size that you specified,' I take it that this is the limit specified with OracleConnectionCacheImpl.setMaxLimit()?"

  • John Burkey? (lead architect of JavaFX?)

    I came across this on the "JavaFX Blog" this morning.
    http://blogs.sun.com/javafx/entry/going_to_ctia
    To quote "On the agenda we have John Burkey, lead architect for JavaFX, who will be showing off latest and greatest developments in JavaFX and Hinkmond Wong will be presenting a session on Java ME as well.".
    Who is this Mr John Burkey? Who is driving JavaFX these days? Where is Chris Oliver? "Enquiring minds want to know". (Is this expression really yesterday? :-) )
    Seriously, JavaFX has to do a lot more to "engage" us poor developers. I have seen "webinars" hosted by the likes of "Richard Bair", "Amy Fowler", "Hinkmond Wong" etc. Much thanks to Stephen Chin for making that happen; you have done "yeoman service" to this community. If Mr John Burkey is a "lead architect" and is somehow driving Prism (the next-gen scenegraph?) and other strategic initiatives, it would be really (really, really) useful for us if he could do a "webinar" or something similar.
    The "window of opportunity" for client-side Java will not be open forever.
    We need the leaders of the JavaFX team at Sun/Oracle ("Snorcle" didn't quite catch on, did it? :-) ) to enlist us "foot soldiers" (the developers) in the battle. Being "closed source" and "tight-lipped" isn't helping.
    JavaFX performance sucks. JavaFX Composer is a "joke". My JavaFX app takes a lot more time to start compared to its Swing equivalent.
    Where is the "flagship" app for JavaFX? The "Vancouver Olympics" site doesn't cut it. (IMHO)
    There's got to be a BHAG (http://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal) in the user space that should drive JavaFX development and usage. Nandini Ramani has talked about the "petstore" in the Java EE world. My favorite candidate continues to be "OpenOffice". IMHO Oracle could play the role of "shepherd", "cheerleader" or something like that as "Cloud Office" is allowed to develop steam in the "open". Oracle's collaboration efforts (Beehive ...) could be woken from their deep slumber.
    I guess, I have said enough. I am a fan of JavaFX and desperately want it to win.
    /rk

    The "webinars" by "Richard Bair", "Amy Fowler", "Hinkmond Wong", hosted at http://www.svjugfx.org/ have been "immensely usefull". On re-reading my original posting, I felt this may not have come across. Hence this "follow-up". Many, many thanks to all of them.
    Cheers ...
    /rk
