Volume Query

One of our LE setups has a problem with the audio level on vocals. We have turned everything up, but it is still too low.
Is there a master control that can raise the audio for better balance?

... using the gain plug-in in an input object will at least help to create a file with a decent-sized waveform you can work with if your pre-amps are for some reason not enough. But yes, of course fuzzy normal is right: getting a good level at the pre-amp stage is the problem you should be trying to solve...

Similar Messages

  • LUKS Volume Query

    Hi,
    My laptop died in February, so I kept its SATA HDD with me, but I was unable to access it until recently. It was a dual-boot system of Windows and Arch, with the encrypted partition being /dev/sda3...
    I went to a PC store to try to access it. A guy plugged it into a laptop and everything worked fine, including password entry and unlocking LUKS, but a driver mismatch (my old system was AMD, so the drivers differed) killed the display at login/console.
    I received an Apricorn Inc. "DriveWire" device and plugged it in... The Windows partition showed up, then I went to boot from the device, and again, all fine (the ext2 /boot partition works fine, etc.), until:
    The device is not a LUKS volume and the crypto= parameter was not specified.
    No prompt for the password.
    I am wondering if first plugging it into the device while in Windows (and checking the mounted Windows drive) somehow corrupted the encrypted /dev/sda3 partition (/dev/mapper/root), or if the hard drive needs to be plugged directly into a system (rather than booted via a USB device), or if there are bad sectors / physical damage...

    Strike0 wrote: If you plug it in via USB/the adapter now, the drive is likely not sda (and sda3) anymore, so you have to adjust the kernel parameter for the LUKS partition in your bootloader. Since it booted once in the PC store, including the LUKS open, it sounds unlikely that anything is damaged.
    Yes, thankfully correct. I found a desktop to plug the drive into and it worked fine; I backed up my files.
    But for future reference when using such "DriveWire" devices, what would the device be, then, if not /dev/sda3? /dev/sdb3, even though it's booted "first in line" via the boot load order?
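    One way to sidestep the device-name shuffle entirely (a sketch, assuming the Arch encrypt hook and a GRUB-style kernel line; the UUID is a placeholder you'd take from blkid) is to identify the LUKS partition by UUID:
    # Find the UUID of the LUKS partition (run from any live system)
    blkid /dev/sda3
    # Kernel line fragment referencing the partition by UUID rather than
    # by device name -- it survives sda/sdb renumbering via USB adapters
    cryptdevice=UUID=<uuid-from-blkid>:root root=/dev/mapper/root
    With that in place, it no longer matters whether an adapter enumerates the disk as sda, sdb, or anything else.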

  • Jdbc-odbc bridge connection error

    Hi,
    Please help me to create a jdbc-odbc connection from Jdev10g to a TSM server database.
    I have created a database connection with the following parameters:
    Connection name: tsm1a
    Connection type: jdbc-odbc bridge
    Username: admin
    Password: ***
    Datasource name: tsm1 (this is the DSN name configured in my Windows ODBC Data Source Administrator)
    Extra parameters: NONE
    Clicking on the Test button shows Success!
    I have tried the SQL worksheet; it succeeds and gives me a correct result for my "select * from volumes" query.
    I have created a simple JSP page:
    <%@ page contentType="text/html;charset=windows-1250"%>
    <%@ taglib uri="http://xmlns.oracle.com/j2ee/jsp/tld/ojsp/sqltaglib.tld"
    prefix="database"%>
    <html>
    <body>
    <database:dbOpen connId="c1" URL="jdbc:odbc:TSM1a" user="admin" password="***">
    <database:dbQuery connId="c1" output="html" queryId="q1" >
    select * from volumes
    </database:dbQuery>
    </database:dbOpen>
    </body>
    </html>
    The result of the run of it:
    javax.servlet.jsp.JspTagException: Failed to establish connection     at oracle.jsp.dbutil.tagext.dbOpenTag.doStartTag(dbOpenTag.java:115)
    Please help me: how do I get a connection to the ODBC datasource from a JSP page?
    What is wrong in the URL string, or elsewhere?
    Thanks a lot in advance:
    Arpad

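    One way to narrow this down (a sketch, not a definitive fix): the Test button runs inside JDeveloper's own JVM, while the JSP runs in the embedded OC4J, which may not see the same DSN (a User DSN, for example, is invisible to a server running under another account; it usually needs to be a System DSN). Also note that the dbOpen URL says jdbc:odbc:TSM1a -- that is the JDeveloper connection name, whereas a jdbc:odbc: URL expects the Windows DSN name, which here is tsm1. A standalone test with the plain bridge driver isolates the tag library from the driver (TsmDsnTest is a hypothetical class name; sun.jdbc.odbc.JdbcOdbcDriver ships with Sun JDKs of this era):
    import java.sql.*;

    // Standalone smoke test of the ODBC DSN, bypassing the JSP tag library
    public class TsmDsnTest {
        public static void main(String[] args) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            // Use the DSN name exactly as defined in the ODBC administrator
            Connection con = DriverManager.getConnection("jdbc:odbc:tsm1", "admin", "***");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select * from volumes");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            rs.close();
            st.close();
            con.close();
        }
    }
    If this runs, changing the JSP's URL to jdbc:odbc:tsm1 is the first thing to try.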

  • Database sizing

    Hi,
    We have a production machine sized by the client for hosting an application. The configuration is:
    12 CPUs
    32 GB RAM
    512 GB hard disk.
    What is the maximum database size the machine can host with good performance?
    What is the maximum number of instances the machine can host with optimum performance?
    What is the maximum number of databases recommended?
    What is the maximum table size (number of records) for which the machine can answer a select * query on indexed columns in under 10 s?
    We are expecting a table / set of tables to have 3 billion records. What is the best way to host these (splitting across multiple drives, keeping indexes on separate drives)?

    The answer to all these questions is going to be "it depends". It depends on data volume, query patterns, user volume, database design, I/O throughput, etc. You may be able to run a dozen databases on this machine with no problem, or you may find that a single poorly designed database brings it to its knees.
    For handling very large data sets, you'll definitely want to consider partitioning (an extra-cost option on top of your Enterprise Edition license), materialized views, and other data warehousing features.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
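    For the 3-billion-record table, range partitioning with local indexes is the usual starting point. A minimal sketch (table and column names are hypothetical; the Partitioning option must be licensed on top of Enterprise Edition):
    CREATE TABLE readings (
      reading_id   NUMBER,
      reading_date DATE,
      device_id    NUMBER,
      value        NUMBER
    )
    PARTITION BY RANGE (reading_date) (
      PARTITION p_2011q1 VALUES LESS THAN (TO_DATE('2011-04-01','YYYY-MM-DD')),
      PARTITION p_2011q2 VALUES LESS THAN (TO_DATE('2011-07-01','YYYY-MM-DD')),
      PARTITION p_max    VALUES LESS THAN (MAXVALUE)
    );
    -- A LOCAL index is partitioned along with the table, so scans and
    -- index maintenance stay bounded as the row count grows
    CREATE INDEX readings_dev_ix ON readings (device_id) LOCAL;
    Partition pruning then lets date-bounded queries touch only the relevant partitions, and individual partitions and their indexes can be placed in tablespaces on separate drives, which is exactly the multi-drive split the question asks about.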

  • Undo Tablespace queries

    Hi All,
    I am a bit confused after reading about undo tablespaces, so I need some clarification.
    1. There are 2 undo tablespaces in our database. Could someone explain how the 2 undo tablespaces work? I didn't find any details about this.
    2. Both undo tablespaces are 100% utilized. Where will new transactions go? Will a new transaction force used blocks to expire if needed?
    3. What should be the ideal size of an undo tablespace relative to the total storage size?
    4. Are archive logs related to the undo tablespace? If yes, when do the archive logs get generated?
    Thanks

    1. In a RAC database, each instance has to have its own Undo Tablespace.
    In a non-RAC (single-instance) database, only one Undo Tablespace may be active at any time. The other Undo Tablespace would be inactive but can be switched to with an ALTER SYSTEM SET UNDO_TABLESPACE command.
    2. Even if an Undo Tablespace is "100% used", Oracle can expire (and even drop) old extents and segments that are no longer needed per the Undo_Retention value. Thus, undo data for older transactions that have committed (where more than the Undo_Retention period has elapsed since the commit) will be overwritten.
    3. There's no "ideal size". Undo sizing is based on transaction volumes, fluctuations in transaction volumes, query patterns, etc.
    4. No, there is no direct relation between the two. However, all Undo that is generated is also written to Redo -- i.e. Redo captures Undo just as it captures changes to tables, indexes, etc.
    I suggest that you read the Oracle Concepts documentation at
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/toc.htm
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
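    Both points 1 and 3 come down to a couple of statements (the tablespace name is a placeholder; V$UNDOSTAT is the standard 10-minute-interval statistics view):
    -- Switch the active undo tablespace in a single-instance database
    ALTER SYSTEM SET UNDO_TABLESPACE = UNDOTBS2;
    -- Gauge actual undo demand before picking a size: peak undo blocks
    -- consumed per interval, and the longest-running query observed
    SELECT MAX(undoblks)    AS peak_undo_blocks_per_interval,
           MAX(maxquerylen) AS longest_query_seconds
    FROM   v$undostat;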

  • SAP Query as extraction tool - any benchmarks for throughput/volume constraints

    We are looking to use SAP Query as our extraction tool to create a large dataset for downstream systems.  We will also allow the users of those systems to access our SAP system (4.6C) through SAP Query for custom extracts.  We plan on controlling this by custom coding our own logical database.
    Has anyone out there got any benchmarks/metrics/success stories from any other SAP client that might have already done this?  Our concern is that SAP Query cannot handle the volume we need to pull from the system, so we need to determine what the choke points of this solution are.
    Please advise.
    Thanks,
    Gareth de Bruyn

    Hi Gareth,
    I have already used SAP Query in the way you want. Additionally, we added lots of ABAP routines to the query as our extraction modules. Everything works fine. At the least you will be able to do delta updates...
    The only issue we faced was loading time-dependent master data. For that, you have to load the data to the PSA first and then update your InfoObject. But there is an OSS note out for this behavior.
    regards
    Siggi

  • Volume in BEx Query

    Hello all,
       I am on BW 3.5.  I have a report that tracks material volume.  It is working fine.  However, there is one issue.  Right now, I calculate volume every time inventory moves (volume is a key figure in the 0IC_C03 cube).  Essentially, with every relevant material movement I do a lookup into the material master and get the volume.  Then I populate it into the cube and report on it.  This works fine.
       The issue is that volume is (surprisingly) not constant.  It changes over time for the same material (go figure - that makes no sense to me, but whatever), so the numbers are sometimes inconsistent between months.  For example, 100 units of material 123 show a volume of 100 ft3 in April, and the same 100 units show a volume of only 50 ft3 in May, because someone changed the volume in the R/3 material master.
       So what I need is to base all my volume calculations (both current and historical) on what is in the material master at report runtime.  It's like I need a navigational attribute for volume.  I already have a formula variable, but while it works at the material level, it is blank if the material isn't showing in the report.  Most of the users want to see things at the material group level.  I do not know a way of turning volume (a key figure) into a navigational attribute, or better yet, of making a formula variable work when the characteristic it is tied to is not showing in the report (i.e. so that it aggregates with report navigation).
       Can anyone help me out here?
       Thanks.
    Dave

    I can think of an alternative solution: the use of a Virtual Key Figure.
    In this case, Volume would be a virtual KF populated at query runtime by ABAP code, and in that code you can look up the volume value from the material master.
    But use of a Virtual KF will slow query execution, as the code runs for each record.
    Regards,
    Gaurav

  • Query for adding weight and volume in pdn1

    Hello experts,
    I have written the following query, but it does not bring me the desired data in the two columns:
    --select * from pdn1 where docentry='3895'
    --select * from opdn
    SELECT
    T0.[ItemCode]
    , T1.[ItemName]
    , T2.[CardCode]
    , T2.[SuppName]
    , T0.[PriceFOB]
    , T3.[Currency] as 'Currency'
    ,T0.RATE
    ,CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END as 'Purchase Price in EUR'
    ,T0.[PriceAtWH]
    ,T2.DOCNUM
    ,T2.DOCDATE
    ,((T0.[PriceAtWH]-CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END )/CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END)*100 as 'Ship Cost'
    ,T1.BVOLUME as 'Volume'
    ,T1.SWEIGHT1 as 'Weight'
    ,T5.volume as 'Document Volume'
    ,T5.weight1 as 'Document Weight'
    FROM
    dbo.IPF1 T0 
    INNER JOIN dbo.OITM T1 ON T0.ItemCode = T1.ItemCode
    INNER JOIN dbo.OIPF T2 ON T0.DocEntry = T2.DocEntry
    INNER JOIN dbo.OCRD T3 ON T0.CardCode = T3.CardCode
    INNER JOIN dbo.OPDN T4 ON  T4.DOCENTRY=T2.DOCENTRY
    INNER JOIN DBO.PDN1 T5 ON T5.DOCENTRY=T4.DOCENTRY
    WHERE
    --(T2.[DocDate]>='2011-08-01'and T2.[DocDate]<='2011-08-31')
    (T2.[DocDate]>='[%0]' or '[%0]'='') and (T2.[DocDate]<='[%1]' or '[%1]'='')
    And (T1.itemCODE='[%2]' or '[%2]'='')
    --And T2.CARDCODE='80763'
    And (T2.CARDCODE='[%3]' or '[%3]'='')
    I actually want to bring in the data stored in the PDN1 table for the following two lines:
    ,T5.volume as 'Document Volume'
    ,T5.weight1 as 'Document Weight'
    Do you have any idea?

    Hi.......
    Your query works perfectly.
    But if you want only those details of PDN where the volume is greater than zero, then please try the report below:
    SELECT
    T0.[ItemCode]
    , T1.[ItemName]
    , T2.[CardCode]
    , T2.[SuppName]
    , T0.[PriceFOB]
    , T3.[Currency] as 'Currency'
    ,T0.RATE
    ,CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END as 'Purchase Price in EUR'
    ,T0.[PriceAtWH]
    ,T2.DOCNUM
    ,T2.DOCDATE
    ,((T0.[PriceAtWH]-CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END )/CASE WHEN T0.RATE!='0' THEN T0.[PriceFOB]*(1/T0.RATE) ELSE T0.[PriceFOB] END)*100 as 'Ship Cost'
    ,T1.BVOLUME as 'Volume'
    ,T1.SWEIGHT1 as 'Weight'
    ,T5.volume as 'Document Volume'
    ,T5.weight1 as 'Document Weight'
    FROM
    dbo.IPF1 T0 
    INNER JOIN dbo.OITM T1 ON T0.ItemCode = T1.ItemCode
    INNER JOIN dbo.OIPF T2 ON T0.DocEntry = T2.DocEntry
    INNER JOIN dbo.OCRD T3 ON T0.CardCode = T3.CardCode
    INNER JOIN dbo.OPDN T4 ON  T4.DOCENTRY=T2.DOCENTRY
    INNER JOIN DBO.PDN1 T5 ON T5.DOCENTRY=T4.DOCENTRY
    WHERE
    --(T2.[DocDate]>='2011-08-01'and T2.[DocDate]<='2011-08-31')
    (T2.[DocDate]>='[%0]' or '[%0]'='') and (T2.[DocDate]<='[%1]' or '[%1]'='')
    And (T1.itemCODE='[%2]' or '[%2]'='')
    --And T2.CARDCODE='80763'
    And (T2.CARDCODE='[%3]' or '[%3]'='')
    And T5.volume >0
    Regards,
    Rahul
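    One further thing worth checking if either query returns multiplied rows: PDN1 (T5) is joined to OPDN (T4) on DocEntry alone, so every landed-cost line is paired with every line of the matching GRPO. A hedged refinement, assuming the standard SAP Business One schema in which line tables such as PDN1 and IPF1 both carry ItemCode, is to match the item as well:
    INNER JOIN DBO.PDN1 T5 ON T5.DocEntry = T4.DocEntry
                          AND T5.ItemCode = T0.ItemCode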

  • Tech Tool Deluxe Volume Structure test query

    Hi,
    Sorry if this is elsewhere, but my searches have not supplied a definitive answer.
    As part of my regular maintenance routine on my iMac (mid 2010 i7 quad core) that I purchased just before Christmas, I have run the AppleCare Tech Tool Deluxe scans at regular monthly intervals with no problems, until about a month ago.
    When the program does its stuff, all tests pass except the volume structure one (in the window showing the image of the volume structure, it says "fail!" above the icon). Yet the final report that Tech Tool Deluxe comes out with says everything has passed. What's going on?
    Because of this, I've also done a surface scan in the program and it was OK, as was verifying the disk in Disk Utility from the Snow Leopard DVD. I've repaired permissions and even reformatted the Macintosh HD and done a complete reinstall of the OS (not from backup), all of which come up with the same result in Tech Tool Deluxe - the volume structure fails, but the final report passes.
    The version of Tech Tool Deluxe I have installed from the AppleCare disc is 3.1.3, which according to Micromat is the current one, but this is from 2009.
    So my question is this:
    Has the updated Snow Leopard (10.6.6) changed its volume structure slightly, meaning that Tech Tool Deluxe thinks there's a problem as it's scanning but then ignores it for the final report, or is my HDD failing?
    I've not noticed any issues whatsoever with the iMac and I have both Time Machine and bootable SuperDuper backups, so I'm not worried about my data.
    Would you guys recommend ignoring it for now, or should I get Diskwarrior to check it out further?
    Thanks for any help, or light that can be shed on this issue.

    Hi Den,
    Thanks for this, I emailed them and they are giving the usual story of "this product was developed for Apple, so you'll need to speak to them".
    I bit the bullet and purchased DiskWarrior 4.3, and by installing it onto my SuperDuper backup drive I was able to check out the iMac's internal HD.
    DiskWarrior was able to find errors in the file structure and permissions etc. (it didn't give details) and was able to repair them. After rebooting into the internal HDD, I ran TechTool Deluxe 3.1.3 again, and this time it said that the volume structure failed both during the test and in the final report. It tried to get me to download the repair version of the software, but the web page that opened had nothing to do with TechTool's download. As a little self-test, I ignored it and ran the test again. This time, it failed during the test (as before) and then said that it passed in the final report.
    I have therefore come to the conclusion that if Disk Utility AND DiskWarrior say that it's fine, then the issue must be with TechTool Deluxe. At least it doesn't seem to be an issue with the actual drive.
    I admit that I was getting suspicious of TechTool when a fresh install on a reformatted HDD said that the volume structure failed and then passed. There must be something different about the current version of Snow Leopard compared to the one current at the time of release of version 3.1.3 of Tech Tool Deluxe.
    Thanks for all your help guys.
    Obviously, if someone can prove me wrong, that would help me ascertain what is going on.

  • Unix data volume filesystem number query

    Hi,
    Does anyone have a definitive answer as to whether more than 1 data filesystem is supported in MaxDB 7.8 on Unix? 
    I can't easily find an answer in any of the guides or on the Marketplace.  I'm presuming yes, and that it should be no problem; certainly, in my experience on other sites there are always sapdata1, 2, 3, etc., but this is my first MaxDB site and there is only 1 sapdata FS here. We need to extend the data area, but the volume group is restricted in size, so we are looking at all possible options (I already know what the other options are, so I just want to know about a secondary data FS).
    If someone can point me to the official SAP answer to this (or provide one) that would be much appreciated.
    Thanks,
    Chaz.

    Hi Chaz,
    Chaz wrote: Does anyone have a definitive answer as to whether more than 1 data filesystem is supported in MaxDB 7.8 on Unix?
    The answer is yes. I have already configured SAP systems with sapdata1, sapdata2, sapdata3 and sapdata4 on MaxDB on Unix.
    I have done it for a liveCache database.
    Chaz wrote: I can't easily find an answer in any of the guides or on the Marketplace... this is my first MaxDB site and there is only 1 sapdata FS here. We need to extend the data area, but the volume group is restricted in size, so we are looking at all possible options.
    When you install SAP on a MaxDB database, by default it shows only a single sapdata with its size. On the same screen there is an option to add another sapdata of similar or higher size. The path of each sapdata<X> can be changed on the same screen.
    For example, on the OS you have 4 filesystems: sapdata1, sapdata2, sapdata3, sapdata4.
    During installation I may create 4 sapdata<x> and distribute them as shown below:
    /sapdb/<SID>/sapdata1                      2000MB
    /sapdb/<SID>/sapdata2                      2000MB
    /sapdb/<SID>/sapdata3                      2000MB
    /sapdb/<SID>/sapdata4                      2000MB
    Similarly, I can create a separate filesystem for the log:
    /sapdb/<SID>/saplog                        2000MB
    Note: You may not find this information written in SAP Notes, as there is no standard definition of how many sapdata<x> filesystems you may have in your system. The SAP installation process expects a minimum of 1 sapdata<x> and 1 saplog<x> partition for a MaxDB installation to complete.
    Regards,
    Deepak Kori
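    For the follow-on step of actually extending the data area onto a second filesystem, something along these lines should work (a sketch only: the path and the size, given in 8 KB pages, are placeholders, and the exact db_addvolume syntax should be verified against the MaxDB 7.8 documentation):
    # Open a DBM session and add a second data volume as a file (F)
    dbmcli -d <SID> -u control,<password>
    > db_addvolume DATA /sapdb/<SID>/sapdata2/DISKD0002 F 128000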

  • [Q] Backup Sections, Backup Images, Volume Sets: how to query relationships

    Greetings.
    How can I list the following:
    Which Backup Sections make up a given Backup Image?
    Which Volume Set makes up a given Backup Image?
    Thanks,
    Jeff

    You can start with "obtool lsvol -c" and progress with "obtool lssection"
    You can find all these described in the OSB Reference Guide.
    Thanks
    Rich
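    Put together, a short session along these lines walks the chain from volume set to backup sections (these are the two commands named above; exact output columns vary by OSB release):
    # Volumes, with the backup sections stored on each (-c = contents)
    obtool lsvol -c
    # Backup sections, including the backup image each section belongs to
    obtool lssection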

  • Cleaning large query volumes

    We are currently trying to get rid of a large number of unused BW queries in our system. A department wishes to put a range of queries into quarantine to see the impact on the business.
    Is it possible to remove access to a selection of queries without actually deleting them?

    Do you have BW Statistics installed and switched on?
    This is the easiest way to determine the actual use of queries, but beware: some reports are designed to be run only once per year, so you need good publicity within your organisation to avoid repercussions.
    What should have happened is that your system's architect created naming conventions supported by authorizations. That way you would have no difficulty identifying "test" reports not executed in the last 6-12 months. That's what I always set up.
    In any case, the way to proceed would be to identify reports by the last execution date.
    Once you have a subset of all queries, identify the person who created them (or, where your business doesn't create them, who uses them), and email them the list.
    Ask them to confirm which ones they use with a deadline to reply.
    Follow it up if you don't get a response, using polite emails: "Four weeks until these queries get removed - please confirm which ones to keep"... Two weeks... One week... 3 days... 2 days... 1 day...
    Then delete them. They might complain, but they were given fair warning, and as long as your management stands by this process it's very hard to argue that they weren't informed.
    I worked for a very large international organisation; we had to rebuild maybe three of the 2,000-odd queries that were deleted because a user did not react to the emails. No big deal in the end, and we got a nice clean system.

  • Help to query a LONG column in ALL_VIEWS

    Hi Gurus,
    Background:
    After an upgrade from 8.1.7 to 9.0.2 we found that some custom views have ',,' in them, but they are still showing in the 'VALID' status.
    Problem:
    In order to find all the bad views, I am trying to run something like:
    SQL> select view_name from all_views where text like '%,,%'
    but it fails with:
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected NUMBER got LONG
    I tried running something like:
    set serveroutput on size 10000
    declare
    cursor c_dbv is
    select owner,view_name,text
    from all_views ;
    searchstring varchar2(100) := ',,';
    begin
    for ct in c_dbv loop
    dbms_output.put_line(ct.text);
    if instr(lower(ct.text),lower(searchstring)) > 0 then
    dbms_output.put_line(ct.owner||'.'||ct.view_name);
    else
    dbms_output.put_line ('No match' );
    end if;
    end loop;
    end;
    It is also not helping; could someone please suggest an alternative way?
    Regards

    SQL> create or replace view my_test_view
      2  as
      3  select ',,' c from dual
      4  /
    View created.
    SQL> declare
      2     cursor c
      3     is
      4     select view_name,text
      5       from all_views;
      6  begin
      7     for i in c
      8     loop
      9             if instr(i.text,',,') > 0
    10             then
    11                     dbms_output.put_line(i.view_name);
    12             end if;
    13     end loop;
    14  end;
    15  /
    MY_TEST_VIEW
    PL/SQL procedure successfully completed.
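    The loop above works because a LONG fetched in a cursor FOR loop is implicitly converted to VARCHAR2, so only the first 32K of each view text gets searched. Another commonly used approach, sketched here with a made-up staging table name, is to convert the LONG to a CLOB once with TO_LOB and then search it in plain SQL:
    -- One-off staging copy: TO_LOB converts the LONG column to a CLOB
    CREATE TABLE view_texts AS
      SELECT owner, view_name, TO_LOB(text) AS text
      FROM   all_views;
    -- Ordinary searches now work against the CLOB
    SELECT owner, view_name
    FROM   view_texts
    WHERE  dbms_lob.instr(text, ',,') > 0;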

  • Query Help Please

    Hi... having problems with a query.  Any assistance would be much appreciated.
    Two queries with identical columns: Villages_Query_1 and Villages_Query_2.
    Both have these columns: Village_ID, Village_Name, Player_ID.
    I need to find all records in Villages_Query_2 where the Village_IDs match but the Player_IDs have changed.
    Example Village_Query_1
    Village_ID   Village_Name   Player_ID
    1            Houston        1
    2            Dallas         2
    3            Chicago        3
    Example Village_Query_2
    Village_ID   Village_Name   Player_ID
    1            Houston        1
    2            Phoenix        4
    3            Chicago        3
    4            New York       5
    In this case, Village_ID = 2, has changed names (Dallas to Phoenix) and the Player_ID has changed (2 to 4).  In addition, a new record was added.
    The eventual output I need is to be able to report the following:
    Player 2 village "Dallas" was taken by Player 4 and renamed "Phoenix".
    New York is a new village owned by Player 5.
    How the heck do I do this??  I have been trying query after query... reading about query of queries and JOINS and and and... I am now completely confused.
    Help appreciated.
    Mark

    Well... firstly... you do not use MS Access for that volume of data.  Plain and simple.  MS Access is for DBs like "my CD collection".  It's a desktop application, and is not intended to be used as anything other than a desktop application.
    Part of the reason for it not being appropriate for the job is that it can't do things like bulk loading data, which is kinda what you're wanting to do here.  That aside, it's a single-user file-based DB which is simply not designed to work as the back-end for a web application (or any sort of serious application).
    Anyway, I would approach this by putting all the data from the CSV files into the DB as is.  Then on the DB run a query which gets all your changes.  You're really going to struggle with the suggestions here to use valueList() to generate a list that is then used for a NOT IN (#list here#), because you're likely to have a mighty long list there.  Even proper DBs like Oracle only allow about 1000 entries in a list like that (SQL Server is about the same, from memory), so I doubt QoQ will allow even that.  The reason the DBs put limits on these things is that a WHERE IN (#list#) is a really poorly-performing construct.
    If you've got all your data in the DB, then your query becomes pretty easy, and I'm sure even Access could cope with it.  It'd be something like this:
    SELECT VB.village_id, VB.village_name AS village_old_name, VB.player_id AS player_old_id,
    VU.village_id AS village_new_id, VU.village_name AS village_new_name, VU.player_id AS player_new_id
    FROM villages_base VB
    RIGHT OUTER JOIN villages_updates VU
    ON VB.village_id = VU.village_id
    -- the IS NULL arm keeps the brand-new villages, which the outer join
    -- pads with NULLs and which a plain != would silently filter out
    WHERE VB.village_name != VU.village_name
    OR VB.village_id IS NULL
    (that's untested and I only gave it about 1min thought before typing it in, so don't quote me on that!)
    Where villages_base is your original data, and villages_updates is the data that indicates the changes.  I'm kinda guessing that this is the sort of thing you want.  Note: the "new" villages will be the ones which have NULLs for village_id, village_old_name and player_old_id.
    Getting all the data into the DB is going to be a matter of looping over the CSV file and doing an INSERT for each row.  And that will take as long as it takes, so you might need to get some control over your request timeouts.  However doing these inserts will take less time than all the QoQ logic suggested before, so you might be OK.  And the query should be quick.
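    A sketch of that per-row insert loop (assuming CF8 or later for cfloop's file attribute; the datasource name, file path, and the naive comma split are all placeholders to adapt):
    <!--- Load each CSV line into the staging table --->
    <cfloop file="#expandPath('villages_update.csv')#" index="line">
        <cfset fields = listToArray(line, ",")>
        <cfquery datasource="myDsn">
            INSERT INTO villages_updates (village_id, village_name, player_id)
            VALUES (
                <cfqueryparam value="#fields[1]#" cfsqltype="cf_sql_integer">,
                <cfqueryparam value="#fields[2]#" cfsqltype="cf_sql_varchar">,
                <cfqueryparam value="#fields[3]#" cfsqltype="cf_sql_integer">
            )
        </cfquery>
    </cfloop>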
    What happens to the data once the report is written?  Does the "updated" data become the "live" data?  If so, after you run your report you're gonna want to do something like a TRUNCATE on villages_base, and INSERT all the records from villages_update into it (then TRUNCATE villages_update, ready for the next time you need to run this process).  Although don't take my word for it here, as I'm guessing your requirement here ;-)
    Adam

  • Query Scenario Help - Average of Multiple Regions

    Hi, I have a query scenario I can't create, and I hope someone can help with it.
    The user inputs a sales area, and the report is to output the regions in that area and provide the Volume (easy) and the Average across all the presented regions.  The problem is that the number of regions is dynamic, so I don't know how to go about determining the average and making it constant across the regions.  The user can also drill down on a region to present divisions, and I would then need the average for the divisions instead of the regions.  Can you help?
    Example 1
    Input Sales Area 1000000
    Region               1000001  1000002  1000003
    Product1  Volume       100      120      105
    Product1  Average      108      108      108
    Product2  Volume       200      400      325
    Product2  Average      308      308      308
    Example 2
    Input Sales Area 2000000
    Region               2000001  2000002  2000003  2000004
    Product1  Volume       100      120      105      180
    Product1  Average      126      126      126      126     
    Product2  Volume       200      400      325      600
    Product2  Average      381      381      381      381
    To get this layout I have the following in the Query Designer:
    The 2 products are in a structure under Rows
    The key figures structure is also under Rows
    Region is under Columns

    If you redesign the report slightly to move the "Average" to a column at the end of each row (which really makes more sense, IMHO), then the answer is easy. Just change the Suppress Results Row property to "Never" for Region, and change the Display Results As property to "Average" for Volume.
    You could also make this same change to Division and/or any other free characteristic they might use as a drill-across in the report.
    Hope this helps...
    Bob
