PL/SQL table with several arguments

Hi All!
I need a PL/SQL table with several arguments (key columns). It should look like this:
DECLARE
   CURSOR C1
   IS
      SELECT ARTICLE_ID,
             SALES_SECTION_ID,
             COUNTRY_ID,
             COLL_OF_DATA_TYPE_ID,
             SYSTEM_ID
        FROM F_IPV_BASE;

   CURSOR C2
   IS
      SELECT   ARTICLE_ID,
               SALES_SECTION_ID,
               COUNTRY_ID,
               COLL_OF_DATA_TYPE_ID,
               SYSTEM_ID,
               COUNT (ALL ORDER_POSITION_NO) CREDIT_NOTE_POS_QTY
          FROM F_IPV_BASE
         WHERE DAY_ID BETWEEN tFirstDayofMonth AND tLastDayofMonth
           AND BILLING_METHOD_ID = (SELECT BILLING_METHOD_ID
                                      FROM K_BILLING_METHOD
                                     WHERE SRC_BILLING_METHOD_ID = '2')
      GROUP BY ARTICLE_ID,
               SALES_SECTION_ID,
               COUNTRY_ID,
               COLL_OF_DATA_TYPE_ID,
               SYSTEM_ID;

   TYPE CREDIT_NOTE_POS_QTY_REC IS RECORD (
      ARTICLE_ID             F_IPV_BASE.ARTICLE_ID%TYPE,
      SALES_SECTION_ID       F_IPV_BASE.SALES_SECTION_ID%TYPE,
      COUNTRY_ID             F_IPV_BASE.COUNTRY_ID%TYPE,
      COLL_OF_DATA_TYPE_ID   F_IPV_BASE.COLL_OF_DATA_TYPE_ID%TYPE,
      SYSTEM_ID              F_IPV_BASE.SYSTEM_ID%TYPE,
      CREDIT_NOTE_POS_QTY    NUMBER
   );

   TYPE CREDIT_NOTE_POS_QTY_TYP IS TABLE OF CREDIT_NOTE_POS_QTY_REC
      INDEX BY VARCHAR2 (30);

   CREDIT_NOTE_POS_QTY_TAB   CREDIT_NOTE_POS_QTY_TYP;
   tPosQTY                   NUMBER := 0;
BEGIN
   FOR TmpRec1 IN C2
   LOOP
      CREDIT_NOTE_POS_QTY_TAB (TmpRec1.ARTICLE_ID,
                               TmpRec1.SALES_SECTION_ID,
                               TmpRec1.COUNTRY_ID,
                               TmpRec1.COLL_OF_DATA_TYPE_ID,
                               TmpRec1.SYSTEM_ID).CREDIT_NOTE_POS_QTY :=
         TmpRec1.CREDIT_NOTE_POS_QTY;
   END LOOP;

   FOR TmpRec1 IN C1
   LOOP
      IF CREDIT_NOTE_POS_QTY_TAB.EXISTS (TmpRec1.ARTICLE_ID,
                                         TmpRec1.SALES_SECTION_ID,
                                         TmpRec1.COUNTRY_ID,
                                         TmpRec1.COLL_OF_DATA_TYPE_ID,
                                         TmpRec1.SYSTEM_ID)
      THEN
         tPosQTY := CREDIT_NOTE_POS_QTY_TAB (TmpRec1.ARTICLE_ID,
                                             TmpRec1.SALES_SECTION_ID,
                                             TmpRec1.COUNTRY_ID,
                                             TmpRec1.COLL_OF_DATA_TYPE_ID,
                                             TmpRec1.SYSTEM_ID).CREDIT_NOTE_POS_QTY;
      ELSE
         tPosQTY := 0;
      END IF;
   END LOOP;
END;
I get PLS-00316 ("PL/SQL TABLEs must use a single index") in the loops.
Any help will be appreciated.
With best regards,
Andrej Litowka.

Hi
The .EXISTS(n) method just checks whether element n exists, and an associative array takes only a single index, so you cannot subscript it with five values (hence PLS-00316). There is also no built-in function to locate or compare records, or as Oracle says:
Comparing Records
Records cannot be tested for nullity, or compared for equality or inequality.
If you want to make such comparisons, write your own function that accepts two records as parameters and does the appropriate checks or comparisons on the corresponding fields.
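A common workaround is to collapse the five key columns into one VARCHAR2 key, so the table needs only a single subscript. Below is a minimal sketch of that idea, reusing the cursors C1 and C2 from the block above; the helper function make_key and the '|' delimiter are illustrative choices, and since only the count is stored, the record type is no longer needed:

DECLARE
   -- one string key replaces the five separate subscripts
   TYPE CREDIT_NOTE_POS_QTY_TYP IS TABLE OF NUMBER
      INDEX BY VARCHAR2 (200);

   CREDIT_NOTE_POS_QTY_TAB   CREDIT_NOTE_POS_QTY_TYP;
   tKey                      VARCHAR2 (200);
   tPosQTY                   NUMBER := 0;

   -- illustrative helper: concatenates the key columns with a delimiter
   FUNCTION make_key (p_article             F_IPV_BASE.ARTICLE_ID%TYPE,
                      p_sales_section       F_IPV_BASE.SALES_SECTION_ID%TYPE,
                      p_country             F_IPV_BASE.COUNTRY_ID%TYPE,
                      p_coll_of_data_type   F_IPV_BASE.COLL_OF_DATA_TYPE_ID%TYPE,
                      p_system              F_IPV_BASE.SYSTEM_ID%TYPE)
      RETURN VARCHAR2
   IS
   BEGIN
      RETURN    p_article || '|' || p_sales_section || '|' || p_country
             || '|' || p_coll_of_data_type || '|' || p_system;
   END make_key;
BEGIN
   FOR TmpRec1 IN C2
   LOOP
      CREDIT_NOTE_POS_QTY_TAB (make_key (TmpRec1.ARTICLE_ID,
                                         TmpRec1.SALES_SECTION_ID,
                                         TmpRec1.COUNTRY_ID,
                                         TmpRec1.COLL_OF_DATA_TYPE_ID,
                                         TmpRec1.SYSTEM_ID)) := TmpRec1.CREDIT_NOTE_POS_QTY;
   END LOOP;

   FOR TmpRec1 IN C1
   LOOP
      tKey := make_key (TmpRec1.ARTICLE_ID,
                        TmpRec1.SALES_SECTION_ID,
                        TmpRec1.COUNTRY_ID,
                        TmpRec1.COLL_OF_DATA_TYPE_ID,
                        TmpRec1.SYSTEM_ID);

      IF CREDIT_NOTE_POS_QTY_TAB.EXISTS (tKey)
      THEN
         tPosQTY := CREDIT_NOTE_POS_QTY_TAB (tKey);
      ELSE
         tPosQTY := 0;
      END IF;
   END LOOP;
END;

The delimiter must be a character that can never appear in the key values; otherwise two different key combinations could produce the same string.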

Similar Messages

  • Can't update a sql-table with a space

    Hello,
In a transaction I'm getting some values from an SAP ERP system via JCo.
I update a SQL table with these values using a SQL query command.
But sometimes the values I get from SAP ERP are empty (a space) and I'm not able to update the SQL table because of a null-value exception (the column doesn't allow null values). It seems that MII thinks null and space are the same.
I tried something like this when passing the value to the SQL query parameter, but it didn't work:
stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", " ")
This works, but I don't want to have a "_":
    stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", "_")
    Any suggestions?
    thank you.
    Matthias

The problem is that Oracle doesn't know the space function. But it knows a similar function: NVL, which replaces a null value with something else. So this statement works fine for me:
    update marc set
    LGort = '[Param.3]',
    dispo = '[Param.4]',
    schgt = NVL('[Param.5]', ' '),
    dismm = '[Param.6]',
    sobsl = NVL('[Param.7]',' '),
    fevor = '[Param.8]'
    where matnr = '[Param.1]' and werks = '[Param.2]'
If Param.5 or Param.7 is null, Oracle replaces it with a space; in every other case the value is the parameter itself.
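As a quick illustration of that NVL behavior (standard Oracle SQL, runnable against DUAL):

SELECT NVL(NULL, ' ') AS was_null,   -- null is replaced by a single space
       NVL('X', ' ')  AS not_null    -- a non-null value passes through unchanged
  FROM DUAL;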
    Christian, thank you for your hint with the space function. So I remembered the NVL-function.
    Regards
    Matthias

  • Populate SQL table with data from Oracle DB in ODI

    Hi,
I am trying to populate a source SQL table with fields from an Oracle db in ODI. I am trying to perform this using a procedure and I am getting the following error:
    ODI-1226: Step PROC_1_Contract_Sls_Person_Lookup fails after 1 attempt(s).
    ODI-1232: Procedure PROC_1_Contract_Sls_Person_Lookup execution fails.
    ODI-1228: Task PROC_1_Contract_Sls_Person_Lookup (Procedure) fails on the target MICROSOFT_SQL_SERVER connection Phys_HypCMSDatamart.
    Caused By: weblogic.jdbc.sqlserverbase.ddc: [FMWGEN][SQLServer JDBC Driver][SQLServer]Invalid object name 'C2C_APP.CON_V'.
My question is: what is the best method to populate a SQL Server db with data from an Oracle db? Using a procedure? A specific LKM?
I found threads referring to using an LKM to populate Oracle tables with data from a SQL table... but nothing for the opposite direction.
    Any information would help.
    thanks,
    Eric

    Hi Eric,
    If using an Interface, I would recommend the LKM SQL to MSSQL (BULK) knowledge module. This will unload the data from Oracle into a file, then bulk load the staging db on the target using a BULK INSERT.
    Regards,
    Michael Rainey

  • How to integrate SQL table with the cahce

    Hi,
In a normal ASP.NET application we use SqlCacheDependency to integrate SQL Server with the cache, so that any change in a SQL table row replaces the cached data with the latest values.
How do we achieve the same with the Azure cache?
We need to integrate SQL Server with the Azure cache so that any change in a SQL table row replaces the cached data with the latest values.

    Hi,
Caching in Azure is not different from ASP.NET; please see
http://msdn.microsoft.com/en-us/library/windowsazure/gg278356.aspx for more details. Azure provides multiple types of persistent storage which can be leveraged for caching (Azure SQL Database, Azure Table Storage, Azure Blob Storage, etc.). I would also suggest
you read this article (http://www.dnnsoftware.com/blog/cid/425642/Understanding-Windows-Azure-Caching-for-building-high-performance-Websites): once we know where the cached data is, we can sync up the data as expected.
    Best Regards

  • How to register PL/SQL function with Varchar2 argument in Discoverer

    Hi,
I have registered a PL/SQL function in Discoverer Administrator 10.1.2.1. The function has two arguments with data type VARCHAR2. In Discoverer I've selected VARCHAR, as it doesn't have VARCHAR2.
When I use this function in a report using Discoverer Desktop it gives the error "One of the function arguments has an incorrect datatype."
    I would appreciate if somebody can help
    Regards
    BA

    Hi,
First, you don't have to wait until you use it in a report in order to check it:
in Discoverer Administrator you can "Validate" the function right after registering it.
There are a couple of things you should know about registering the function:
1. The "Varchar" option you selected is correct (there is no VARCHAR2 in the admin definition).
2. Check the database function to verify that you indeed registered all its arguments and that they match the names and types of the db function.
3. During the registration, type everything in UPPER CASE.
The other way to register the function (and you will not need to deal with its definition) is to search for the function in a list:
on the Register Function screen press the "Import" button (on the bottom right side of the screen),
then search by the owner (db user / schema) under which the function / package is registered.
    Tamir

  • Data and Cleansing export TO SQL table with Melissa Data appended fails

I am using Data Quality Services with Melissa Data Address Check as reference data. Everything works fine until I take the option to export Data and Cleansing Info, which should give me my cleansed data plus additional data points, such as geocodes, from Melissa. When I do, it fails with the error below.
(Failed to create a new table geocode in database DQS_STAGING_DATA. Check whether the table already exists and have the database administrator make sure the DQS Service has CREATE TABLE rights in the destination database and can INSERT to the destination table.)
This error makes no sense, as the table does not exist and I do have proper rights. I can export Data and Cleansing data if Melissa Data is not involved. When I dig further, it seems to be complaining about column header lengths.
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionCod' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_DeliveryPointCo' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_ResponseRecordI' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_DeliveryPointCh' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionLev' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CongressionalDi' is too long. Maximum length is 128.;
    The identifier that starts with 'Address Validation_Melissa Data Corporation - Address Check - Verify, Correct, Geocode US and Canadian Addresses_CBSADivisionTit' is too long. Maximum length is 128.;
I can see no option to control these column headers in DQS. Has anyone else experienced this? Does anyone know of a workaround?
I have already reported it to Melissa Data and they agreed the problem was the column header length, but said they also had no control over that.

    Hello,
You can create a synchronization rule (SR) with an outbound scoping filter. All objects that match the filter will be provisioned to the SQL connector space (if you do not define a filter, all objects will be provisioned).
Or you can create MV extension rules.
    Regards,
    Sylvain

  • 1 SQL instances with several archive Databases using all AWE RAM memory of server

    Hello,
I just migrated my accounting system to a new SQL Server deployment of the software.
We just purchased the expensive SQL Server Enterprise to accommodate it.
I have some replicated databases of lower priority that I put on the same instance and that we occasionally query. I also imported a 70 GB old archive DB that we use on very rare occasions. We are not as concerned about performance on these databases as we are about the accounting DB on the same instance.
The MAX memory was set to unlimited on that instance. As soon as I put in this monster 70 GB archive database, the AWE memory usage used up my full 30 GB of RAM.
Is there a way to set the memory usage so the archive databases do not get loaded into the AWE, but the critical accounting system DB on the same instance is still taken care of?
Or do I have to shell out another $3-6k for a separate instance? SQL Server Express has a 4 GB limitation, and one of the backup DBs we don't really care about is 20 GB, replicated from Azure.

    Hi,
>> 70GB archived databases the AWE memory usage used up my full 30GB of RAM.
How did you check that the archive database is using 30 GB? Did you use sys.dm_os_buffer_descriptors? Does SQL Server have "locked pages in memory"?
SQL Server brings pages into memory as they are requested. If you access the archive database heavily, it is bound to take memory; but if you stop accessing it and access your other database, SQL Server will flush out the archive pages IF REQUIRED.
SQL Server manages memory dynamically, so I guess you do not need to worry.
>> Is there a way to set the memory usage so the archive databases do not get loaded into the AWE but still the critical accounting system DB on the same instance is taken care of?
No, there is no way; the buffer pool is a shared region.
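For reference, a standard query against sys.dm_os_buffer_descriptors that shows how much of the buffer pool each database is actually holding (pages are 8 KB, so the count is converted to MB):

SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb   -- 8 KB pages converted to MB
FROM   sys.dm_os_buffer_descriptors
GROUP  BY database_id
ORDER  BY buffer_pool_mb DESC;

Running this before and after querying the archive database shows whether it really is the one occupying the 30 GB.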

Takes a long time to drop tables with large numbers of partitions

    11.2.0.3
This is for a build. We are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. This allows us to test the build all the way through; it's our process.
This user has some tables with several thousand partitions. I ran a 10046 trace and Oracle is using PL/SQL loops to do DML against the data dictionary. Any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards.
Anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds.
I can't change the build process. My only option is to try to make this run a little faster. I can't do anything about the hardware (lots of VMs crammed onto too few servers).
This is not a production issue. It's more of a hassle.

Support Note 798586.1 shows that DROP USER CASCADE was slower than dropping individual objects -- at least in 10.2. Not sure if that is still the case in 11.2.
    Hemant K Chitale
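If dropping the objects individually does turn out to be faster, a minimal sketch of that approach (BUILD_USER is a made-up schema name; PURGE skips the recyclebin, which the poster is already planning to disable):

BEGIN
   -- drop each of the user's tables (and all their partitions) explicitly
   FOR t IN (SELECT table_name
               FROM dba_tables
              WHERE owner = 'BUILD_USER')
   LOOP
      EXECUTE IMMEDIATE 'DROP TABLE BUILD_USER."' || t.table_name || '" PURGE';
   END LOOP;
END;
/
-- the user is now much lighter, so this should be quicker
DROP USER build_user CASCADE;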

  • How to add hash table values to SQL Table using Powershell

    Hi,
I have a SharePoint list with four columns (Column1, Column2, Column3, Column4). I am reading the list column values and adding them to a hashtable. Now I want to add the values from the hashtable to a SQL table with four columns (Column1, Column2, Column3, Column4) using PowerShell.
I have written the following script for a single column, but I would like to know how to add values for multiple columns.
if (($key -eq "Column1")) {
    $SqlQuery = "INSERT INTO [TableName] ([Column1]) VALUES ('" + $HashTable.Item($key) + "')"
    # Set new object to connect to sql database
    $connection = New-Object System.Data.SqlClient.SqlConnection
    $connection.ConnectionString = "server=SQLServerName;database=SQLDBName;Integrated Security = True;"
    $connection # List connection information
    $connection.Open() # Open connection
    $Cmd = New-Object System.Data.SqlClient.SqlCommand
    $Cmd.CommandText = $SqlQuery
    $Cmd.Connection = $connection
    $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
    $SqlAdapter.SelectCommand = $Cmd
    $DataSet = New-Object System.Data.DataSet
    $SqlAdapter.Fill($DataSet)
    $DataSet.Tables[0]
    $connection.Close()
}
    Can anybody please help me out to accomplish the task? Any help would be greatly appreciated.
    AA.

    Hi AOk2013,
I'm not knowledgeable about PowerShell, but based on my understanding of hash tables in Java, here are some modifications you can make in your code to achieve your requirement.
If the keys in the hashtable are "Column1", "Column2", "Column3" and "Column4", you can reference the code below.
    if(($key -eq "Column1") ) #what is the purposed of this if ?
    #$SqlQuery = "INSERT INTO [TableName] ([Term]) VALUES ('" + $HashTable.Item($key) +"')"
    #specify the real column names in the table
    $SqlQuery = "INSERT INTO [TableName] ([ColumnA],[ColumnB],[ColumnC],[ColumnD]) VALUES ('" + $HashTable.Item("Column1") +"','"+ $HashTable.Item("Column2") +"','"+$HashTable.Item("Column3") +"','"+$HashTable.Item("Column4") +"')"
    #Set new object to connect to sql database
    $connection = new-object system.data.sqlclient.sqlconnection
    $Connection.ConnectionString ="server=SQLServerName;database=SQLDBName;Integrated Security = True;"
    $connection #List connection information
    $connection.open() #Open Connection
    $Cmd = New-Object System.Data.SqlClient.SqlCommand
    $Cmd.CommandText = $SqlQuery
    $Cmd.Connection = $connection
    $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
    $SqlAdapter.SelectCommand = $SqlCmd
    $DataSet = New-Object System.Data.DataSet
    $SqlAdapter.Fill($DataSet)
    $DataSet.Tables[0]
    $connection.Close()
Since your question is regarding PowerShell, I would suggest you post it in a dedicated PowerShell forum. It is more appropriate, and more experts will assist you there.
    Eric Zhang
    TechNet Community Support

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote:
    Hi,
    I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav

Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
With the table created, insert 1,000 rows into the table, populating each of the first 257 columns with a random 3-byte string, which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
    COL1,
    COL2,
    COL3,
    COL255,
    COL256,
    COL257)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

What are the results of the above?
Before the insert:
NAME                      VALUE
table fetch continue        166
After the insert:
NAME                      VALUE
table fetch continue        166
After the select:
NAME                 STATISTIC#      VALUE
table fetch continue        252        332

Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
Before the insert:
NAME                      VALUE
table fetch continue        332
After the insert:
NAME                      VALUE
table fetch continue        332
After the select:
NAME                 STATISTIC#      VALUE
table fetch continue        252      33695

The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF

What should the 'table fetch continued row' statistic show?
Before the insert:
NAME                      VALUE
table fetch continue      33695
After the insert:
NAME                      VALUE
table fetch continue      33695
After the select:
NAME                 STATISTIC#      VALUE
table fetch continue        252      33695

My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
"Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case.

  • Support for Array Binding and PL/SQL tables (IN, INOUT, or OUT)

    I have attempted, unsuccessfully, to use array binding in stored procedure/function calls where the sp/sf has parameters that are PL/SQL tables. I have seen the topic floating around in this forum, but I have not seen the explicit questions:
    - Does ODP.NET support PL/SQL tables as IN, INOUT or OUT parameters to stored procedures/functions?
    - Will any planned ODP.NET release support PL/SQL tables as IN, INOUT or OUT parameters to stored procedures/functions?
    I am aware that I can use REF CURSORS to handle the OUT situation, but I need to make a design decision concerning supporting parameters of IN and INOUT PL/SQL tables.
    Thanks.
    James

    You really MUST do this!! - i.e. include support for PL/SQL table parameters (IN INOUT and OUT) in a future release of ODP.NET.
    PL/SQL tables are a fundamental type in Oracle stored procedures and you will be preventing a huge number of existing projects from migrating to .NET if you don't acknowledge them as part and parcel of Oracle programming.
    I sincerely hope support for PL/SQL table parameters is treated as a serious issue.
    Think what a coup it would be for you over Microsoft (who don't currently support PL/SQL tables with their .NET native provider for Oracle and don't look as if they will at least in the short term)!
    Please, please, please!

  • Obiee 11g Noob question - bring in one sql table

I am new to OBIEE.
I'm using version 11g and trying to bring in one SQL table. I created my db and table, then I created the data source in ODBC. Everything tests fine.
I then open up the Oracle BI Administration Tool and 'import metadata'. Everything imports fine: under the 'Physical' side I see my SQL table, and if I hit 'View Data' I can see my data just fine.
I think my problem is that under 'Business Model and Mapping' the icon isn't green; it has what looks like a red 'no' type icon. My guess is there is something I have to do to make that green before I will be able to create a report using my SQL table (a diagram?).
This is just one SQL table with no foreign keys or even a primary key. How can I bring in just this one table?

I created the key in the physical layer...
Then I dragged over the table, and then dragged it over again (into the BMM); this created a table called IT_Metric #1.
Now when I right-click and select 'Business Model Diagram', 'Whole Diagram', it opens a new window with my one table (IT_Metric), and then the mouse cursor just shows waiting (hourglass)... it never stops the hourglass.

  • Calling PL/SQL-package, returning PL/SQL-table

    Hi,
I'm trying to call a PL/SQL function returning a PL/SQL table with two numbers. The code below gives me the cryptic error 'Invalid column index'. Does anyone know how to do this? I want to display tab_innlogginger (1) and tab_innlogginger (2) in the report...
    <dataSet id="Innlogginger">
    <sql dataSourceRef="DWH-PL">
    <![CDATA[
    declare
    tab_innlogginger dwh_lib.tabdef_innlogginger;
    begin
    tab_innlogginger := dwh_lib.tellinnlogginger (:l_fra_dato, :l_til_dato);
    end
    ]]>
    </sql>
    <input id="l_fra_dato" value="${l_fra_dato}" dataType="xsd:date"/>
    <input id="l_til_dato" value="${l_til_dato}" dataType="xsd:date"/>
    </dataSet>
    The dwh_lib.tabdef_innlogginger is defined as:
    type tabdef_innlogginger is table of number index by binary_integer;
    Regards
    Erik

OK, found something here:
Re: steps to create BI publisher report through oracle stored procedure
It seems pipelined functions using Oracle objects should work.
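For reference, a minimal sketch of that pipelined approach (the object type innlogging_row, collection type innlogging_tab, and function name tell_innlogginger_pipe are all made-up names; only dwh_lib comes from the post). An anonymous block returns nothing to BI Publisher, but a pipelined function can be queried like a table:

CREATE OR REPLACE TYPE innlogging_row AS OBJECT (antall NUMBER);
/
CREATE OR REPLACE TYPE innlogging_tab AS TABLE OF innlogging_row;
/
CREATE OR REPLACE FUNCTION tell_innlogginger_pipe (p_fra DATE, p_til DATE)
   RETURN innlogging_tab PIPELINED
IS
   tab   dwh_lib.tabdef_innlogginger;
BEGIN
   tab := dwh_lib.tellinnlogginger (p_fra, p_til);

   -- emit each element of the PL/SQL table as a row
   FOR i IN 1 .. tab.COUNT
   LOOP
      PIPE ROW (innlogging_row (tab (i)));
   END LOOP;

   RETURN;
END;
/

The data set query then becomes a plain SELECT:

SELECT antall FROM TABLE (tell_innlogginger_pipe (:l_fra_dato, :l_til_dato))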

  • SSRS/Powerview to compare SQL table and excel sheet

    I have a SQL table and an excel sheet with some data...
I want to be able to compare the two and find out which Excel rows are missing in the SQL table.
Would it be easier to do this report in SSRS, or would it be better to do it in Excel Power View?
Either way, how do I go about it?
    Thanks in advance for your help...
    Dhananjay Rele

    Hi Dhananjay,
According to your description, you want to compare the data of a SQL table and an Excel sheet. To achieve this goal, we can create two tables in a Reporting Services report: one for the SQL table with a SQL Server connection type, another for the Excel sheet with an ODBC connection type.
For more details about how to create the report, please see the following steps:
Create a report server project with the SQL Server Data Tools (SSDT) Business Intelligence Templates list.
Create a new report definition file in Solution Explorer.
Create a data source named DataSource1 with the Microsoft SQL Server type, then select the SQL table's database from the corresponding server.
Create a data source named DataSource2 with the ODBC type, then select the Excel file.
Create two datasets which return the SQL data and the Excel sheet data based on the two data sources: one for DataSource1, another for DataSource2.
Create two tables next to each other based on the datasets on the design surface.
    References:
    Create a Basic Table Report (SSRS Tutorial)
    Create SSRS report using Excel Data Source Step by Step
    If there are any other questions, please feel free to ask.
    Regards,
    Katherine Xiong
    TechNet Community Support

  • Fill internal table with mutliple entries for nested structure

    Dear ABAP Experts,
I have a question related to filling internal tables with nested structures.
    I have a structure like this:
    BEGIN OF proto,
              sicht TYPE ysicht,
              version TYPE FAGLFLEXA-RVERS,
              BEGIN OF kons,
    kon TYPE YKONSEINHEIT,
              END OF kons,
              jahr TYPE CHAR04,
    END OF proto.
Now I need to fill this structure with values (via an internal table), but how can I save multiple values for element "kon" for one single entry of structure "proto"?
An example could be:
sicht = '01'
version = '100'
kon = 1001 (first entry)
kon = 1002 (second entry)
and so on (n entries)
jahr = '2008'
    Thanks in advance for every helpful answer.
    Regards
    Thomas

TYPES: BEGIN OF proto,
         sicht TYPE ysicht,
         version TYPE faglflexa-rvers,
         kons TYPE STANDARD TABLE OF ykonseinheit WITH NON-UNIQUE KEY table_line,
         jahr TYPE char04,
       END OF proto.

DATA: ls_proto TYPE proto,
      lt_proto TYPE STANDARD TABLE OF proto.

ls_proto-sicht = '01'.
ls_proto-version = '100'.
INSERT '1001' INTO TABLE ls_proto-kons.
INSERT '1002' INTO TABLE ls_proto-kons.
ls_proto-jahr = '2008'.
INSERT ls_proto INTO TABLE lt_proto.
    If you're going to use a more complicated inner table with several components, then you need to define a type for those components. 
    matt
