Defining more parameters than a query needs

Hi.
When I run the following code, I get the error "ORA-01036: illegal variable name/number".
The problem is that I currently define more parameters than the query needs. That's because I don't know how many bind variables the query uses, and I would rather not parse the query...
I don't understand why I have to define exactly the same number of parameters, and in the exact order ... It doesn't make sense. As bind variables have names, there should be no problem passing more parameters or parameters in a different order: the binding should be done by name...
I'm currently using Oracle 10g, ODP.NET and .NET Framework 2.0.
I would appreciate any help ...
Thank you.
Ricardo Coimbras
===== BEGIN VB.NET CODE =====
Sub Execute_Query(ByVal SqlString As String)
Dim ObjCmd As OracleCommand
Dim DataAdap As OracleDataAdapter
Dim outDsCorpo As DataSet
Dim Constroi_Conn_String_Oracle As String
Dim mObjConnOracle As OracleConnection
Constroi_Conn_String_Oracle = "User ID=uuu" & _
";Password=ppp" & _
";Data Source=bd" & _
";Pooling=false"
mObjConnOracle = New OracleConnection(Constroi_Conn_String_Oracle)
mObjConnOracle.Open()
ObjCmd = mObjConnOracle.CreateCommand()
ObjCmd.CommandType = CommandType.Text
ObjCmd.CommandText = SqlString
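' NOTE: two parameters are always added below, even when the SQL text binds fewer -- with positional binding this mismatch raises ORA-01036.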
ObjCmd.Parameters.Add("p1", OracleDbType.Char, 3, "001", ParameterDirection.Input)
ObjCmd.Parameters.Add("p2", OracleDbType.Char, 3, "001", ParameterDirection.Input)
DataAdap = New OracleDataAdapter(ObjCmd)
outDsCorpo = New DataSet()
DataAdap.Fill(outDsCorpo)
GridView1.DataSource = outDsCorpo
GridView1.DataBind()
DataAdap = Nothing
ObjCmd = Nothing
mObjConnOracle.Close()
mObjConnOracle.Dispose()
mObjConnOracle = Nothing
End Sub
Execute_Query "select * from map.t_mapa_def where mapa_def_cod = :p1 order by mapa_def_cod"
===== END VB.NET CODE =====

Hi,
BindByPosition is the default behavior, as per the docs; you can change that by setting cmd.BindByName = True, which adds a little extra overhead.
That won't fix the error seen when binding an arbitrary number of parameters to the statement, though. You need to bind the correct number and types of parameters.
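For example, here is a minimal sketch of the command setup with name binding enabled, assuming the query text only binds :p1 (as in the Execute_Query call in your snippet), so only that one parameter is added:
' Sketch: bind by name and add exactly one parameter per bind variable in the SQL text.
ObjCmd = mObjConnOracle.CreateCommand()
ObjCmd.CommandType = CommandType.Text
ObjCmd.CommandText = SqlString
ObjCmd.BindByName = True    ' match parameters to :names instead of positions
ObjCmd.Parameters.Add("p1", OracleDbType.Char, 3, "001", ParameterDirection.Input)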
Greg

Similar Messages

  • SYS_REFCURSOR takes more time than direct query execution

I have a stored proc which has 4 inputs and 10 outputs, and all outputs are of SYS_REFCURSOR type.
Among the 10 outputs, 1 cursor returns 4k+ records; all the other cursors have 3 or 4 records and an average of 5 columns each. With this, it takes 8 sec to complete the execution. If we query directly, it gives the output in .025 sec.
I verified the code and located the issue with the cursor which returns 4k+ records only.
That cursor is opened on a temporary table (which has 4k+ records) without any filter. The inserts into the temporary table are direct inserts only, and I found nothing to modify there.
Can anyone suggest how we can bring the results in less than 3 sec? This is really a challenge, since the code needs to go live next week.
    Any help appreciated.
    Thanks
    Renjish

    I've just repeated the test in SQL*Plus on my test database.
    Both the ref cursor and direct SQL took 4.75 seconds.
    However, that time is not the time to execute the SQL statement, but the time it took SQL*Plus in my command window to print out the 3999 rows of results.
    SQL> create or replace PROCEDURE TEST_PROC (O_OUTPUT OUT SYS_REFCURSOR) is
      2  BEGIN
      3    OPEN  O_OUTPUT FOR
      4      select 11 plan_num, 22  loc_num, 'aaa' loc_nm from dual connect by level < 4000;
      5  end;
      6  /
    Procedure created.
    SQL> set timing on
    SQL> set linesize 1000
    SQL> set serverout on
    SQL> var o_output refcursor;
    SQL> exec test_proc(:o_output);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.04
    SQL> print o_output;
      PLAN_NUM    LOC_NUM LOC
            11         22 aaa
            11         22 aaa
            11         22 aaa
            11         22 aaa
            11         22 aaa
    3999 rows selected.
    Elapsed: 00:00:04.75
    SQL> select 11 plan_num, 22  loc_num, 'aaa' loc_nm from dual connect by level < 4000;
      PLAN_NUM    LOC_NUM LOC
            11         22 aaa
            11         22 aaa
            11         22 aaa
            11         22 aaa
            11         22 aaa
            11         22 aaa
    3999 rows selected.
    Elapsed: 00:00:04.75
    That's the result I expect to see, both taking the same amount of time to do the same thing.
    Please demonstrate how you are running it and getting different results.

  • Return value from database function taking a lot more time than the query

    Hi guys,
I have a query that makes a call to a database function. The function takes in a few parameters and returns a Date. Now, the query within the function takes barely .05 seconds. However, doing a select of get_join_dates from dual takes almost 6 seconds for each call.
    Here is the Query:
    select s.student_id, s.student_name, s.organization_code
    from   student s
    where  s.student_id = :p_student_id
    and    s.student_enrollment_date = get_join_dates( :p_year,
                                                       :p_month,
                                                       :p_student_id,
                                                       s.organization_code );
And here is the database function. The select inside this function barely takes 0.05 seconds per call. The function gets called 3 times in my case, as there are 3 records in the org_body table for this student.
    create or replace function
    get_join_dates( p_yyyy in org_body.fiscal_yyyy%type,
                           p_month in  org_body.fiscal_mm%type,
                           p_student_id in student.student_id%type,
                           p_organization_code in org_body.organization_code%type) return date as
        t_enrollment_date  date;
        cursor cur_latest_enrollment_date is
          select max(enrollment_date)
          from   org_body
          where  fiscal_yyyy = p_yyyy
          and    fiscal_mm = p_month
          and    student_id = p_student_id
          and    organization_code = p_organization_code;
      BEGIN
        open cur_latest_enrollment_date ;
        fetch cur_latest_enrollment_date into t_enrollment_date;
        close cur_latest_enrollment_date ;
        return t_enrollment_date;
      exception
        when others then
          null;
    end;
However, when I run the statement below, it takes close to 6 seconds to retrieve a record. In turn, my query is becoming really slow, taking almost 35 seconds. Imagine that with more records.
    select get_join_dates( 2010, '01', '2167543', 'PSYCH01' ) from dual;
If I run my query with the condition below instead, it takes 0.5 seconds.
    select s.student_id, s.student_name, s.organization_code
    from   student s
    where  s.student_id = :p_student_id
    and    s.student_enrollment_date = '01-JAN-10'
Any ideas would be greatly appreciated.

    Any reason why you are doing this with the stored function?
    You could just do this with SQL. Embed the query in the function as a subquery in your initial query from STUDENT.
    select s.student_id, s.student_name, s.organization_code
    from   student s
    where  s.student_id = :p_student_id
    and    s.student_enrollment_date =
    (select max(enrollment_date)
          from   org_body
          where  fiscal_yyyy = :p_year
          and    fiscal_mm = :p_month
          and    student_id = s.student_id
      and    organization_code = s.organization_code);
Why your function is not performing well, I cannot say with the information you have provided.
    Maybe sqltrace a call and see what the reason is.

  • JDBC: Prepared statements with more parameters than column names

    I'm using the latest version of the JDBC driver - 4.1.5605.100_enu - on Java 1.7, Linux.
    I'm connecting to MS SQL Server 2012 Express Edition using a connection URL of the form jdbc:sqlserver://10.0.0.2;user=username;password=pwd;database=testdb1
    I have a table with two columns. One is an ID (type bigint) and one is numeric(38, 19).
    The following code works exactly as expected:
PreparedStatement stm = connection.prepareStatement("INSERT INTO myTable(id, num) VALUES (?, ?)");
// repeatedly set parameters using setLong, setBigDecimal, then addBatch
stm.executeBatch();
    The following code does not work as expected:
    PreparedStatement stm = connection.prepareStatement("INSERT INTO myTable(id, num) VALUES (?, ?),  (?, ?)");
    stm.setLong(1, 1);
    stm.setBigDecimal(2, new BigDecimal("1.234"));
    stm.setLong(3, 2);
    stm.setBigDecimal(4, new BigDecimal("1.234"));
    stm.addBatch();
    stm.executeBatch();
    The code runs normally in the second case, but the second row inserted contains the wrong value in the "num" column - it's been rounded to 1.0 instead of stored as 1.234.
    I think this may be because the driver does not understand the types of parameters whose indexes are beyond the number of columns in the insert statement. Running the following code on the second prepared statement:
    System.err.println(stm.getParameterMetaData().getParameterTypeName(2)); // prints "numeric"
    System.err.println(stm.getParameterMetaData().getParameterTypeName(4)); // fails with IndexOutOfBoundsException
    As far as I can tell from the JDBC JavaDoc, this usage is valid and ought to work. Certainly it works as expected (including the parameter metadata) using PostgreSQL and their JDBC driver. Is this a bug in Microsoft's driver?

    Hi dtn-cfl,
Thanks for waiting.
Based on my research (using SQL Server Profiler to trace DB events), the PreparedStatement finally passes the statements below to SQL Server.
    stm.setLong(1, 1);
    stm.setBigDecimal(2, new BigDecimal("1.234"));
    stm.setLong(3, 2);
    stm.setNull(4, Types.DECIMAL);
    declare @p1 int
    set @p1=0
    exec sp_prepexec @p1 output,N'@P0 bigint,@P1 decimal(38,3),@P2 bigint,@P3 decimal(38,0)',N'INSERT INTO myTable(id, num) VALUES (@P0, @P1), (@P2, @P3) ',1,1.234,2,NULL
    select @p1
Pay attention to the @P3 decimal(38,0): it seems (I don't have access to the JDBC driver source code, so I have to say seems) that
stm.setNull(4, Types.DECIMAL) is finally parsed as type decimal(38,0). In SQL Server, for the decimal data type, the one with the smaller scale has higher precedence. To understand the precedence, please see the code below; if you have more interest in data type precedence, see the SQL Server data type precedence documentation.
    declare @num1 decimal(38,3) --scale 3
    declare @num2 decimal(38,2) --scale 2
    set @num1 = 3.225
    set @num2 = 3.22
    select @num1 as num
    union all
    select @num2
    output
    num
    3.23
    3.22
@num1 gets rounded to keep the column data type consistent, namely keeping the column as type decimal(38,2).
Let's go back to your code. If you would like to make it work properly, please see below.
    stm.setLong(1, 1);
    stm.setBigDecimal(2, new BigDecimal("1.234"));
    stm.setLong(3, 2);
stm.setNull(4, Types.INTEGER);
As per the data type precedence link above, a decimal has higher precedence than an integer, so your decimal will not get rounded.
It is not only the case in your post: any data type inconsistency will lead to the rounding problem. See below.
    stm.setLong(1, 1);
    stm.setBigDecimal(2, new BigDecimal("1.234"));
    stm.setLong(3, 2);
    stm.setBigDecimal(4, new BigDecimal("1.2")); // or new BigDecimal("1.23") and any other decimal with different scale leads to rounding problem.
So when you set parameters for a PreparedStatement like "INSERT INTO myTable(id, num) VALUES (?, ?), (?, ?)" with more than one row, you should pay attention to data type consistency.
The Microsoft JDBC Driver for SQL Server may not be that intelligent; however, we can't say definitely that this is a bug.
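For instance, here is a minimal sketch of one way to keep the scales consistent (hypothetical values; BigDecimal.setScale pads 1.2 to 1.200 so both decimal parameters are inferred with the same scale):
// Sketch: give every BigDecimal the same scale before binding, so the driver
// infers one consistent decimal type for all the row-value parameters.
stm.setLong(1, 1);
stm.setBigDecimal(2, new BigDecimal("1.234"));            // scale 3
stm.setLong(3, 2);
stm.setBigDecimal(4, new BigDecimal("1.2").setScale(3));  // 1.200, also scale 3
stm.addBatch();
stm.executeBatch();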
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • I need to transfer my library from my old computer to my new computer (both PCs.)  How is this done?  The process seems to be more opaque than it really needs to be.  Thank you!!

My library (which includes items from CDs and items purchased from iTunes) is on my old PC.  I have a new PC and want to transfer the library.  I can't see how to do this.  I've turned Home Sharing on for both computers, but the share doesn't show up on either one.  I can't figure out how to move the library.  When I try to burn it to a CD it keeps saying the CD isn't blank, even though it is.
    Any advice would be greatly appreciated.  Thank you!

O.k., maybe this is progress. I transferred the iTunes folder to an external hard drive, connected that to the new computer, and told iTunes to look for the library there. It found the library.
    The playlists are not there, but that's not a big deal.
The songs I tried to transfer to the new computer are still there, with the "!" tag, meaning they won't play. No big deal. I deleted all the "!" entries, because the songs were there in files without the "!".
    Final problem, I hope.
    I ended up with hundreds of duplicate copies. Both dupes play. Again, no big deal other than the annoyance of having to delete the dupes.
    Next step, after I have completed the deletion of the dupes.
1. Recreate the playlists manually, using my iPod as the guide.
2. Then sync the iPod with the new computer's library, and all will be well.
    Am I right? Or is there possible trouble?

  • Web Browser Session Hangs When Opening More Than One Query

    Let me start off with the BW environment - we are on BW release 3.50 at support pack level 12. Basis component is release 6.40 at level 12.
    We have a situation where a web browser HTTP session "hangs" while trying to execute more than one query at a time. A role based menu template is used by our end-users to run queries. The end-users execute a query and while it is running, try to execute a second query. But the web browser for the second query does not become available until the first query is complete.
    The following is the detailed steps of the process:
    1. From my user menu in BW, expand the folder for Role Z: BW_REPORTING and click on SAP_BW_TEMPLATE – Web_Role_Menu. Web browser window will open.
    2. Within the ZTPL_Role_Menu_Reporting web page, select Role Z: FUNC_DEVELOPER > BW Reporting > Controlling > Profitability Analysis > Detail query. (Role Z: FUNC_TESTER can be used in this example as well, just follow the same menu path).  Click on Detail and another web browser window will open for the query selection screen.
    3. On the selection screen for the CO-PA Detail query, enter values and execute the query.
    4. While the CO-PA Detail query is executing, go back to the first opened web browser for ZTPL_Role_Menu_Reporting and go to Role Z: FUNC_DEVELOPER > BW Reporting > Controlling > Profitability Analysis > Operating Concern query. Run the CO-PA Operating Concern query while the CO-PA Detail query is still executing. The session for the Operating Concern “hangs” and does not display the selection screen until the CO-PA Detail query has completed. Once the Detail query has finished, then the second query, CO-PA Operating Concern will then display the selection screen for the query.
    Our end-users expect the second query to display the selection screen immediately with no delay. The end-users do not want to wait for the selection screen of the second query. They want to be able to execute the queries simultaneously.
Our Basis group has tried adjusting some parameters and has implemented OSS note 853396, but the issue still exists.
    Is it a BW issue or is it an issue with MS Internet Explorer?
    Your input and suggestions are appreciated.
    Thanks.

    Hi.
I had a somewhat similar issue earlier: when I tried closing BW query windows, they would freeze and wouldn't close. I resolved the issue by turning off some BHOs (Browser Helper Objects) in IE. It can be done via IE > Tools > Manage Add-ons.
    Cheers
    Anand

  • Query in timesten taking more time than query in oracle database

    Hi,
Can anyone please explain why a query in TimesTen takes more time
than the same query in the Oracle database?
I am describing in detail below, step by step, what my settings are and what I have done.
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
5. THEN I CREATE THE READ-ONLY CACHE GROUP AND LOAD IT:
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
I SET THE TIMING:
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of Columns     ORACLE     TimesTen
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

Issue with opening a site in SharePoint Designer 2010 - The version of SharePoint Foundation running on the server is more recent than the version of SPD you are using; you need a more recent version of SPD.

I have a SharePoint site which I am trying to open in SPD 2010, and I am getting the following error (some of my team members are able to open it):
The version of SharePoint Foundation running on the server is more recent than the version of SPD you are using; you need a more recent version of SPD.
These are the things I tried:
1) Earlier I had MS Office 32-bit and SPD 32-bit; I uninstalled them and installed the 64-bit versions of both.
2) I uninstalled, restarted my machine, and installed again; still no use.
3) I installed SPD 2010 Service Pack 2, 64-bit.
4) I uninstalled SPD 2010 and opened the site via Site Actions > Edit Page in SPD; it then asked me to install SPD 2010, so I installed it and tried to open the same site; again, the same error.
How do I solve this issue?
I checked permissions at the site level as well: I have full control like the others. I don't have access to Central Admin, so where else can I check the permission settings?

Please update us with your SharePoint version.
    Regards,
    Pratik Vyas | SharePoint Consultant |
    http://sharepointpratik.blogspot.com
    Posting is provided AS IS with no warranties, and confers no rights
    Please remember to click Mark As Answer if a post solves your problem or
    Vote As Helpful if it was useful.

Help!!!!! I need to get my iPhoto library off of a hard drive which is running really slowly and clicks rather more loudly than usual. How do I do this? I'm even on my iPhone asking this!

Help!!!!! I need to get my iPhoto library off of a hard drive which is running really slowly and clicks rather more loudly than usual. How do I do this? I'm even on my iPhone asking this! The computer also runs only marginally faster in safe boot mode! Could the Apple Store help?

It certainly sounds like your hard drive is dying.  I would immediately back up your hard drive to an external USB or FireWire hard drive before doing anything else.  It just may save your data in time, before you lose it all.
    Hope this helps

  • Does anyone know of a simple home accounting application? iBank does not seem to be that user friendly and has more features than I need. I would like something that will connect to my bank and will allow the creation of a budget that is simple.

    I am looking for a very simple home accounting application. iBank seems to not be very intuitive and it has more features than I need. I am looking for the ability to create a budget and keep a bank register with the ability to connect to my bank and do updates.

I have a small business; I chose QuickBooks because anything else cost much more. That triggered the hiring of a bookkeeper who knew how to use it. It was not the cheapest choice after all.

I need to pass a query in the form of a string to the DBMS_XMLQUERY.GETXML package... the parameters to the query are a date and a varchar... please help me.

I need to pass a query in the form of a string to the DBMS_XMLQUERY.GETXML package; the parameters to the query are a date and a varchar. Please help me build the string. Below is the query and the output. (The string is building fine, except the parameters come through without quotes.)
Here is the procedure:
    create or replace
    procedure temp(
        P_MTR_ID VARCHAR2,
        P_FROM_DATE    IN DATE ,
        P_THROUGH_DATE IN DATE ) AS
        L_XML CLOB;
        l_query VARCHAR2(2000);
    BEGIN
    l_query:=  'SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
          ||'mi''), ''00:'
          ||'00'',''24:'
          ||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
          ||'mi'')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = '
        ||P_MTR_ID||
       ' AND s_mtrch   = ''1''
        AND s_datetime BETWEEN TO_DATE('
        ||P_FROM_DATE||
        ',''DD-MON-YY'') AND (TO_DATE('
        ||P_THROUGH_DATE||
        ',''DD-MON-YY'') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = '
        ||P_MTR_ID||
        ' AND s_mtrch   = ''2''
        AND s_datetime BETWEEN TO_DATE('
        ||P_FROM_DATE||
        ',''DD-MON-YY'') AND (TO_DATE('
        ||P_THROUGH_DATE||
        ','' DD-MON-YY'') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)';
SELECT DBMS_XMLQUERY.GETXML(L_QUERY) INTO L_XML FROM DUAL;
    INSERT INTO NK VALUES (L_XML);
    DBMS_OUTPUT.PUT_LINE('L_QUERY IS :'||L_QUERY);
    END;
OUTPUT parameters are in bold (the issue is they come through without single quotes; otherwise the query is fine):
    L_QUERY IS :SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),'9999999.000') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),'$9,999,999.00') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),'9999999.000') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,'mm/dd/yyyy') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,'hh24:mi'), '00:00','24:00', TO_CHAR(s_datetime+.000011574,'hh24:mi')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = N3165 AND s_mtrch   = '1'
        AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,'DD-MON-YY') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = N3165 AND s_mtrch   = '2'
        AND s_datetime BETWEEN TO_DATE(01-JAN-13,'DD-MON-YY') AND (TO_DATE(31-JAN-13,' DD-MON-YY') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)

    The correct way to handle this is to use bind variables.
    And use DBMS_XMLGEN instead of DBMS_XMLQUERY :
    create or replace procedure temp (
      p_mtr_id       in varchar2
    , p_from_date    in date
, p_through_date in date
)
is
      l_xml   CLOB;
      l_query VARCHAR2(2000);
      l_ctx   dbms_xmlgen.ctxHandle;
    begin
      l_query:=  'SELECT
        a.s_datetime DATETIME,
        a.downdate Ending_date,
        a.downtime Ending_time,
        TO_CHAR(ROUND(a.downusage,3),''9999999.000'') kWh_Usage,
        TO_CHAR(ROUND(a.downcost,2),''$9,999,999.00'') kWh_cost,
        TO_CHAR(ROUND(B.DOWNUSAGE,3),''9999999.000'') KVARH
      FROM
        (SELECT s_datetime + .000011574 s_datetime,
          TO_CHAR(S_DATETIME ,''mm/dd/yyyy'') DOWNDATE,
          DECODE(TO_CHAR(s_datetime+.000011574 ,''hh24:'
          ||'mi''), ''00:'
          ||'00'',''24:'
          ||'00'', TO_CHAR(s_datetime+.000011574,''hh24:'
          ||'mi'')) downtime,
          s_usage downusage,
          s_cost downcost
        FROM summary_qtrhour
        WHERE s_mtrid = :P_MTR_ID
        AND s_mtrch   = ''1''
        AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
                           AND (TO_DATE(:P_THROUGH_DATE,''DD-MON-YY'') + 1)
        ) a,
        (SELECT s_datetime + .000011574 s_datetime,
          s_usage downusage
        FROM summary_qtrhour
        WHERE s_mtrid = :P_MTR_ID
        AND s_mtrch   = ''2''
        AND s_datetime BETWEEN TO_DATE(:P_FROM_DATE,''DD-MON-YY'')
                           AND (TO_DATE(:P_THROUGH_DATE,'' DD-MON-YY'') + 1)
        ) B
      where a.S_DATETIME = B.S_DATETIME(+)';
      l_ctx := dbms_xmlgen.newContext(l_query);
      dbms_xmlgen.setBindValue(l_ctx, 'P_MTR_ID', p_mtr_id);
      dbms_xmlgen.setBindValue(l_ctx, 'P_FROM_DATE', to_char(p_from_date, 'DD-MON-YY'));
      dbms_xmlgen.setBindValue(l_ctx, 'P_THROUGH_DATE', to_char(p_through_date, 'DD-MON-YY'));
      l_xml := dbms_xmlgen.getXML(l_ctx);
      dbms_xmlgen.closeContext(l_ctx);
      insert into nk values (l_xml);
    end;
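For example, a hypothetical invocation, reusing the meter ID and date range visible in your output above:
begin
  -- values taken from the printed query for illustration only
  temp('N3165', to_date('01-JAN-13','DD-MON-YY'), to_date('31-JAN-13','DD-MON-YY'));
end;
/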

Time Machine needs more space than necessary.

    Hi.
Can anyone help? I have two 1TB external hard drives; one backs the other up. Until now it has been fine, but suddenly Time Machine has decided there's not enough room, even though they're the same size drives. I've deleted the backup drive and the Time Machine preferences, but it still won't work. I have one drive with 930GB on it, and Time Machine says it needs 1.1TB of space. Why does it need an extra 170GB? Help?
    Thanks

Since Time Machine keeps copies of previous versions of things you've changed or deleted, it needs much more space than the data it's backing up.
The extra 170 GB is for "padding": workspace on the backup volume, in case the original estimate isn't accurate or more files are changed or added while the backup is running.
#1 in the link Carolyn supplied explains this a bit more, and recommends that your backup drive be 2-3 times the size of the data it's backing up.

  • Need more flexibility than photo sites.....

    Looking to make Christmas cards and photo books, and need more customization than allowed on Snapfish, etc.  Any recommendations?

If you are in Europe, try Photobox.com (and if I refer you with your email details I can get a bonus 'gift'!).

  • All things being equal, does 4th gen quad core i7 (2.5GHz) need more RAM than 5th gen dual core i5 (2.7GHz) for same tasks?

    Bought mid/late 2014 15" MBPr with the i7 2.5GHz and 16GB RAM. I'm within 2 week exchange period at Best Buy, and thinking of switching to 2015 13" MBPr with i5 2.7GHz and 8GB RAM for battery life and weight, and they don't offer the upgraded 16GB RAM option for any 13" models.
    Needs to be Best Buy because of gift cards
    Use docked at office 75% of time vs 25% out of office, so portability and battery life are nice to have but not as important as if I had 8 hour flights on weekly basis
    See appeal of having 5th gen i5 in 13" vs my 4th gen i7 that will be updated this summer
    Not a gamer and don't use Final Cut or other video editing, but I tend to leave several applications open and usually have between 1-3 GB free RAM of the 16GB total at any time according to my Memory Cleaner readings
Therefore, I'm worried that 8GB is a non-starter, as simple math says I would have a shortfall of 5-7GB of RAM based on the above readings, but I'm wondering if the dual-core i5 will use RAM more efficiently than the quad-core i7
    Any help greatly appreciated!

Also to note, I would like this to last 4 years, which was the reason I bought the higher-spec'd model even though I don't run games or video editing.

Error in SQL query: "loop has run more times than expected (Loop Counter went negative)"

    Hello,
    When I run the query as below
    DECLARE @LoopCount int
    SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
    WHILE (
        SELECT Count(*)
        FROM KC_PaymentTransactionIDConversion with (nolock)
        Where KC_Transaction_ID is NULL
        and TransactionYear is NOT NULL
    ) > 0
    BEGIN
        IF @LoopCount < 0
            RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
                   16, -- Severity.
               1) -- State.
    SET @LoopCount = @LoopCount - 1
    end
I am getting the error "loop has run more times than expected (Loop Counter went negative)".
Could anyone help with this issue ASAP?
    Thanks ,
    Vinay

    Hi Vinay,
According to your code above, the error message makes sense: the loop runs as long as the value returned by "SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL" is bigger than 0,
and each iteration decreases @LoopCount. Without changing the table data, the returned value is always bigger than 0, so the loop keeps decreasing @LoopCount until it goes negative and raises the error.
    To fix this issue with the current information, we should make the following modification:
    Change the code
    WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
    ) > 0
    To
    WHILE @LoopCount > 0
Besides, since the current loop body does no useful work, please modify the query based on your requirements.
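For reference, here is a minimal corrected sketch with that change applied (the per-row work the loop is presumably meant to do is omitted, as in the original post):
DECLARE @LoopCount int
SET @LoopCount = (SELECT COUNT(*)
                  FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                  WHERE KC_Transaction_ID IS NULL
                    AND TransactionYear IS NOT NULL)
-- Loop on the counter itself, as suggested above, so it can never go negative.
WHILE @LoopCount > 0
BEGIN
    -- ... per-row conversion work would go here ...
    SET @LoopCount = @LoopCount - 1
END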
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support
