View with SAP tables fails with "No Owner" error

Dear experts,
We have created a view (VIEW_MARA_MAKT) on Information Steward (4.2 SP1) using SAP tables MARA and MAKT.  This view is working perfectly.  Next we created another view joining our previous view (VIEW_MARA_MAKT) to table MARC.  The view validates correctly, but when trying to view the data we get the following error:
Data Services execution failed for VIEW_MATERIAL_PLANT. Error :
(14.2) 04-08-14 12:46:21 (E) (0432:6996) RES-020106: |SESSION JOB_VView_736_43f3f6da_b863_46d6_ad64_b4f432a939b0|DATAFLOW EABAPDF_VIEW736_0|STATEMENT <GUID::'4a0dddf9-c993-4577-9e9a-1e0adf2dc9e2::794ee432-24f4-4801-bccf-587ef489e934::65e45615-06f8-426f-abf2-61345f6c252f' READ TABLE ICCDS_21."".MARC OUTPUT(IS_VIEW_RDR_475_0)> Table <MARC> for owner <> was not found in the repository for datastore <ICCDS_21>. Import this table from the external source. If the name is case-sensitive in the database (and not all uppercase), enter the name as it appears in the database and use double-quotation marks around the name to preserve the case. (COR-10690)
It appears that Data Services is not happy that no owner name is sent from IS, but for SAP connections it is not possible to specify owner names when adding the tables to IS.
Can you please give some recommendations to resolve this error?

What you can do is use a table of records and create a block based on a stored procedure.
Below is a table of records and a procedure for querying from multiple tables.
If you want to update, insert, or delete rows, you need to create three more procedures in the package: one for updating, another for inserting, and so on.
I have never done the updating part before, so I would need to spend some time coding it.
If you want, I can post it later as soon as I have it.
CREATE OR REPLACE PACKAGE TEST5 AS
  TYPE REC1 IS RECORD (FIRST  TEST1.FIRST%TYPE,
                       SECOND TEST1.SECOND%TYPE,
                       THIRD  TEST2.THIRD%TYPE);
  TYPE TAB1 IS TABLE OF REC1 INDEX BY BINARY_INTEGER;
  PROCEDURE TEST1CREATE (P_TAB IN OUT TAB1);
END;
/
CREATE OR REPLACE PACKAGE BODY TEST5 AS
  PROCEDURE TEST1CREATE (P_TAB IN OUT TAB1) IS
    -- Join the two tables and load the result into the index-by table.
    CURSOR C IS
      SELECT A.FIRST, A.SECOND, B.THIRD
        FROM TEST1 A, TEST2 B
       WHERE A.FIRST = B.FIRST;
    L_REC REC1;
    i     NUMBER := 0;
  BEGIN
    OPEN C;
    LOOP
      FETCH C INTO L_REC;      -- fetch first, then check %NOTFOUND
      EXIT WHEN C%NOTFOUND;
      i := i + 1;
      P_TAB(i) := L_REC;
    END LOOP;
    CLOSE C;                   -- always close the cursor
  END TEST1CREATE;
END;
/
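If it helps, here is a minimal, hypothetical anonymous block showing how the procedure might be called and its rows read back (it assumes DBMS_OUTPUT is enabled):
DECLARE
  L_TAB TEST5.TAB1;
BEGIN
  TEST5.TEST1CREATE(L_TAB);
  -- Print every row the procedure loaded into the index-by table.
  FOR i IN 1 .. L_TAB.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(L_TAB(i).FIRST || ' ' || L_TAB(i).SECOND || ' ' || L_TAB(i).THIRD);
  END LOOP;
END;
/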

Similar Messages

  • RAC Install fails with "OCR upgrade failed with (-1073741819)" on 10.2.0

    I am trying to install RAC 10.2.0 with two Server 2003 R2 nodes connected to a fiber channel SAN. I have set up the shared storage, created the extended partitions, and set up the unformatted, no-drive-letter volumes. Both servers see the unformatted volumes (and no others do). The preinstallation checks work fine and the installation works fine until the first configuration assistant runs. That would be the "Oracle Clusterware Configuration Assistant". It tries to run crssetup.config.bat and fails on step 3 (configuring OCR Repository) with
    ocr upgrade failed with (-1073741819)
    If I uninstall clusterware and try again it will not show the OCR and Voting disks as available disks until I delete the volumes and partitions and recreate them. I have run the precheck batch files and they show everything as good as well. When I define the OCR and Voting drives in the clusterware setup I do not select format or assign drive letter. Every time I try to do the install I get the exact same error.
    Is there anything I am missing or some trick or patch I may not have done?

    Take a look at these notes:
    Note 341214.1 - How To clean up after a Failed Oracle Clusterware Installation on Windows
    Note 388730.1 - Oracle RAC Clusterware Installation on Windows Commonly Missed / Misunderstood Prerequisites
    Rgrs,
    Paulo.

  • Import tables failed with message "BR1233E Owner name not allowed in param"

    Dear ALL,
    After successfully exporting some tables, I tried to reimport them, but I get the message shown below. Please help me.
    thanks and Best Regards,
    Chrisna
    FYI,
    I am using the brspace command for export and import.
    Main options for import from dump file: /oracle/XXX/sapreorg/sdvmynml.edd/expdat.dmp
    1 * Import utility (utility) ............... [IMP]
    2 - Import type (type) ..................... [tables]
    3 - Owner for import (owner) ............... [SAPR3]
    4 * Tables for import (tables) ............. [SXNODES,SXROUTE,SXSERV,SXADMINTAB,SXDOMAINS,SOPR,SXADDRTYPE,SXCONVERT,SXCONVERT2,SXCOS,... (12 tables)]
    5 - Import table rows (rows) ............... [yes]
    6 - Import table indexes (indexes) ......... [yes]
    7 - Import table constraints (constraints) . [yes]
    8 - Import table grants (grants) ........... [yes]
    9 # Import table triggers (triggers) ....... [yes]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    c
    BR0280I BRSPACE time stamp: 2007-06-20 07.53.36
    BR0663I Your choice: 'c'
    BR0259I Program execution will be continued...
    BR1233E Owner name not allowed in parameter/option 'imp_table|-tables'
    BR0691E Checking input value for 'tables' failed
    BR0669I Cannot continue due to previous warnings or errors - you can go back to repeat the last action

    Thanks, the issue is already solved.
    thanks and Best Regards,
    Chrisna

  • Creating a view on KONA & KONH -  Failed with 'Field name DATAB not unique'

    Hi all,
    I am creating a view on the KONA (Rebate Agreement) and KONH (Conditions Header) tables. The link is KONA-KNUMA = KONH-KNUMA_AG.
    The field 'DATAB' exists in both tables and I have included both fields under 'View Fields'. When I try to activate the view, it fails with 'Field name DATAB not unique'.
       The following is the complete error message:
    Message no. MC060
    Diagnosis
    When you define aggregates (views, lock objects or matchcode objects), you can assign the aggregate fields your own names (the system assigns names to the basic fields automatically). These names must be unique.
    System response
    The system has established that the names assigned to the aggregate fields in an aggregate definition are not unique and issues an error message.
    Procedure
    Assign unique names to the aggregate fields in question.
    Why am I not able to create the view after including a field that exists in both tables, please?
    Thanks,
    Venkat.

    Hi,
    you can include both fields (but what for?); most likely KNUMH will also complain after you fix DATAB (just a guess...).
    You can include both fields if you name them differently (give them different aliases):
    in the 'View Fields' tab, enter two different names in the 'View field' column.
    This will work.
    hope this helps...
    Olivier.
    Message was edited by:
            Olivier Cora
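    For illustration only (SE11 is a dictionary tool, not plain SQL), the same duplicate-column problem and its alias-based fix look roughly like this in SQL; the view and alias names here are made up:
    CREATE VIEW V_KONA_KONH AS
    SELECT A.KNUMA,
           A.DATAB AS AGREEMENT_DATAB,   -- DATAB coming from KONA
           H.DATAB AS CONDITION_DATAB    -- DATAB coming from KONH
      FROM KONA A
      JOIN KONH H
        ON H.KNUMA_AG = A.KNUMA;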

  • Populating the cache with VPD tables fails

    Using the Add Tables wizard I am trying to add tables to an Oracle Database cache. The tables have had RLS policies applied using the DBMS_RLS package. The Add Tables wizard fails with an ORA-28112 error (failed to execute policy function).
    Are there any known problems with caching tables of a virtual private database?
    Of course we want to cache an entire table, not just a portion as defined by that table's security policy. We have a procedure call that can cause the security policy to return a null predicate (effectively turning off security for all tables). We have set up a logon trigger to run this procedure for the user that populates the cache (we have done this successfully for other users), but we still get the error.
    Any help?
    Cache system:
    SPARC/Solaris 8
    9iAS 1.0.2.1
    Oracle EE 8.1.7.0
    Origin database:
    SPARC/Solaris 8
    Oracle EE 8.1.7.0
    Thanks,
    Steve

    Steve,
    In my opinion you should look first into the trace files. They are generally located under the
    USER_DUMP_DEST directory. I believe that this error normally generates a trace file, so in the trace file you can see what it was trying to do and what happened. This may also be happening due to permission problems, e.g. the logged-on user versus the policy user.
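    If it helps, a quick way to locate that directory (assuming you have access to V$PARAMETER on the instance) is:
    -- Shows where session trace files are written on this instance.
    SELECT value
      FROM v$parameter
     WHERE name = 'user_dump_dest';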
    HTH
    Prakash

  • Set-AzureServiceDiagnosticsExtension or publishing with diagnostics enabled fails with Azure SDK 2.5 for existing Azure services

    This is similar to https://social.technet.microsoft.com/Forums/systemcenter/en-US/487234f4-9748-4f49-ab7b-ce523da4c500/publish-cloud-service-fails-from-visual-studio-2013-update-4-published-asset-entry-for-image but since the given answer provides no solution
    and I found more details, I felt it was necessary to open this new question with those details.
    I have an existing Azure service with two web roles (service and worker), first published in May 2012. Recently I tried to update from SDK 2.2 to SDK 2.5 and from Visual Studio 2013 Update 2 to Update 4. The main reason behind this was to move from log4net to WAD
    and, in doing so, directly move to the new diagnostics version. So before publishing I enabled WAD diagnostics logging in the properties of both roles.
    Trying to publish from Visual Studio to the existing Azure service fails; the VS output shows the following lines:
    11:45:24 - Checking for Remote Desktop certificate...
    11:45:25 - Applying Diagnostics extension.
    11:45:45 - Published Asset Entry for Image Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml not found.
    What's working 1: For testing purposes, I have created a new Azure service in the Azure portal.
    Publishing the same solution from the same development environment to this new service works without problems - the service with WAD diagnostics logging is working fine. Unfortunately this is no solution for my production service, with its
    DNS alias and SSL certificates bound to the existing Azure service.
    What's working 2: Publishing the solution to the existing Azure service WITHOUT diagnostics enabled works.
    The problem with that: trying to activate WAD diagnostics logging after publishing, using the Azure cmdlets, also fails with a similar error message. Following http://blogs.msdn.com/b/kwill/archive/2014/12/02/windows-azure-diagnostics-upgrading-from-azure-sdk-2-4-to-azure-sdk-2-5.aspx I
    tried:
    PS C:\> Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -DiagnosticsConfigurationPath $public_config -ServiceName $service_name -Slot 'Staging' -Role $role_name
    VERBOSE: Setting PaaSDiagnostics configuration for MyWebRole.
    Set-AzureServiceDiagnosticsExtension : BadRequest : Published Asset Entry for Image
    Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml not found.
    At line:1 char:1
    + Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -Diagnostic ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [Set-AzureServiceDiagnosticsExtension], CloudException
        + FullyQualifiedErrorId : Microsoft.WindowsAzure.CloudException,Microsoft.WindowsAzure.Commands.ServiceManagement.
       Extensions.SetAzureServiceDiagnosticsExtensionCommand
    The problem seems to be related to those parts of the service configuration in the cloud which are not replaced by a new deployment, so I compared both services using the Azure cmdlet Get-AzureService.
    I found that the new service has properties which the old one is missing:
    ExtendedProperties      : {[ResourceGroup, myazureservice], [ResourceLocation, North Europe]}
    Is this a hint? Reinstalling or repairing Visual Studio is not the answer to this problem!!!
    What's the meaning of "Published Asset Entry for Image Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml"?
    [Perhaps MS will publish a newer version of its Azure cmdlets, but that's not today's story]
    So what are possible reasons or fixes for this behaviour? Going back to log4net is not my favorite. Even worse, while there are alternative logging solutions,
    I currently have no performance counter monitoring in the Azure portal (Remote Desktop and perfmon are NO solution). Is there any alternative to going back to SDK 2.4?
    Best regards,
     Andreas

    Hi Andreas,
    Thanks for your feedback.
    I will test and reproduce your issue on my side. Any information, I will post back for you.
    Thanks for your understanding.
    Regards,
    Will

  • Service Clients failing with "The request failed with HTTP status 401: Unauthorized"

    Hello,
    We have implemented a solution using the SSRS web service clients as produced by the WSDLs.  We have deployed the application on one server, and via the code we are calling the SSRS on another server (i.e., not using ReportViewer, etc.).  We do
    know that the user signed in has permissions since from the same staging server the Report Manager and Reports can both be accessed and viewed.  However, it's only when we run the application and attempt to execute a report on the remote SSRS server that
    we get the following error:
    The request failed with HTTP status 401: Unauthorized. at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String
    methodName, Object[] parameters) at AMC.AssetTracker.Reporting.ReportExecution.ReportExecutionService.LoadReport(String Report, String HistoryID) at AMC.AssetTracker.Reporting.Report.GetReportByteArray() at AMC.AssetTracker.Reporting.Report.get_ReportStream()
    at AMC.AssetTracker.Reporting.Report.get_GetWebUIDisplay() at AMC.AssetTracker.UserControls.AssetTrackerMain.btnGenerate_Click(Object sender, EventArgs e)
    The credentials we have tried are the DefaultCredentials. One question I have is how do you use custom credentials if neither the DefaultCredentials nor the DefaultNetworkCredentials work? Also, this service call does have to go through an ISA
    server before reaching the SSRS server.
    Any ideas?
    Thanks.

    It sounds like Kerberos authentication is needed in your situation since it's not on the same box.
    There is a one-hop limit with NTLM authentication.
    For more info please see the link below:
    http://social.msdn.microsoft.com/Forums/en-US/sqlreportingservices/thread/452e9627-cd8e-4709-bdd0-fbafcf9fd719
    Hope this helps!
    Thanks, Michael Mei

  • SAP Sourcing 7.0 integration with SAP ERP - issue with SAP PI

    Hi All,
    We are integrating SAP Sourcing 7.0 with SAP ERP 6.0, with SAP PI 7.1 in the middle. So Sourcing will talk to PI, and PI in turn will talk to ERP. Sourcing is on Oracle 11g, whereas PI and ERP are on SQL 2008. All three are on Windows 2003 x64.
    We are following the "Configuration Guide Integration of SAP ERP and SAP Sourcing 7.0" provided by SAP.
    We have successfully configured the Sourcing and PI systems as per the document. In Sourcing, using background jobs, we are able to successfully generate files in the "export" folder, which is part of the FTP directory.
    However, the issue is with PI: we configured the configuration scenario in the Integration Builder and pointed all channels to the FTP folder in Sourcing, but PI is not picking up these files.
    Is there any way to trigger this in the PI system, or do we need to do anything in Sourcing itself?
    Regards,
    Siva.

    Hello Siva,
    I think you may need to deploy the FTP adapter in PI to get this process working, but I am not sure.
    Let's see what other experts suggest.
    Thanks,
    Siva Kumar

  • Creating Job with CmdExe Step failed with error Reason : 5

    Hi, I set up a job with Type = Operating System (CmdExec) and Run As = 'SQL Server Agent Service Account'; it failed to run with the message below:
    Message
    Executed as user: PROD\sqlserveragent. The process could not be created for step 4 of job 0x5A83AE4A12AEF649888E85F4072604F6 (reason: 5).  The step failed.
    1. I couldn't find what the meaning of reason 5 is here.
    2. This step used to work previously and suddenly stopped working these past few days. I wonder whether anyone made changes, but I am not able to trace it. Any guidance on troubleshooting would be helpful.
    Thanks .

     Operating System ... (reason: 5).  The step failed.
    Operating system error code 5 = Access denied; this seems to be a permission issue.
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Best way to deal with Mutating table exception with Row Level Triggers

    Hello,
    It seems that the best way to deal with mutating table exceptions is to put all the trigger code in a package and use it in conjunction with a statement-level trigger.
    This sounds quite cumbersome to me. I wonder, is there any alternative way of dealing with mutating table exceptions?
    With Regards

    AskTom has a good article about this,
    http://asktom.oracle.com/tkyte/Mutate/index.html
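    For what it's worth, on Oracle 11g and later a compound trigger is a common alternative to the package-plus-statement-level-trigger pattern. Here is a minimal sketch; the EMP table, its SAL and DEPTNO columns, and the business rule are purely hypothetical:
    CREATE OR REPLACE TRIGGER emp_sal_check
      FOR UPDATE OF sal ON emp
      COMPOUND TRIGGER
      -- Collect the keys touched by the statement instead of querying EMP
      -- from a row-level trigger (which would raise ORA-04091).
      TYPE dept_tab IS TABLE OF emp.deptno%TYPE INDEX BY PLS_INTEGER;
      g_depts dept_tab;
      AFTER EACH ROW IS
      BEGIN
        g_depts(g_depts.COUNT + 1) := :NEW.deptno;
      END AFTER EACH ROW;
      AFTER STATEMENT IS
        l_avg NUMBER;
      BEGIN
        -- The table is no longer mutating here, so it can be queried safely.
        FOR i IN 1 .. g_depts.COUNT LOOP
          SELECT AVG(sal) INTO l_avg FROM emp WHERE deptno = g_depts(i);
          -- validate l_avg against whatever business rule applies
        END LOOP;
      END AFTER STATEMENT;
    END;
    /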

  • SOA 11g: Integration with BAM 11g failed with ThreadPool has stuck threads

    Hi,
    I have installed the BAM and SOA servers on the local host. Now, while integrating BAM via the BAM Adapter, the request goes into a long-running state and finally fails with the error below.
    <Notice> <Diagnostics> <mars.as.local> <soa_server1> <[ACTIVE] ExecuteThread: '14' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <c82d73b0a2a7776f:37a0afd7:130d1278d97:-8000-0000000000028a7f> <1309264042380> <BEA-320068> <Watch 'StuckThread' with severity 'Notice' on server 'soa_server1' has triggered at Jun 28, 2011 1:27:22 PM BST. Notification details:
    WatchRuleType: Log
    WatchRule: (SEVERITY = 'Error') AND ((MSGID = 'WL-000337') OR (MSGID = 'BEA-000337'))
    WatchData: DATE = Jun 28, 2011 1:27:22 PM BST SERVER = soa_server1 MESSAGE = [STUCK] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "638" seconds working on the request "weblogic.servlet.internal.ServletRequestImpl@16e92176[
    POST /soa-infra/services/default/BAMInsert/bpelprocess1_client_ep HTTP/1.1
    Connection: TE
    TE: trailers, deflate, gzip, compress
    User-Agent: Oracle HTTPClient Version 10h
    SOAPAction: "process"
    Accept-Encoding: gzip, x-gzip, compress, x-compress
    ECID-Context: 1.c82d73b0a2a7776f:2ced6156:130d1186ef6:-8000-00000000000046b2;kYjE0ZJOoOTLkKPOoLRKlSODoITT_G
    Content-type: text/xml; charset=UTF-8
    Content-Length: 270
    *]", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:*
    *Thread-139 "[STUCK] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'" <alive, suspended, sleeping, priority=1, DAEMON> {*
    *java.lang.Thread.sleep(Thread.java:???)*
    oracle.bam.common.remoting.BamEjbClient.getSession(BamEjbClient.java:973)
    oracle.bam.common.remoting.BamEjbClient.getADCSession(BamEjbClient.java:350)
    oracle.bam.adc.api.util.Context.<init>(Context.java:270)
    oracle.bam.adapter.adc.CachedConnection.<init>(CachedConnection.java:134)
    oracle.bam.adapter.adc.ADCManagedConnectionFactory.getCachedConnection(ADCManagedConnectionFactory.java:490)
    ^-- Holding lock: java.lang.String@13416588[fat lock]
    What could be root cause of this? Can anyone please help to find the solution?
    Even though both servers are on the same host, why is the BAM connection taking so much time and putting the server in a warning state like:
    [STUCK] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'
    Do I need to modify any config file at the Admin or BAM level? If yes, which parameter do I need to change?
    Please guide.
    Thanks,
    Sagar.

    Further updates on this:
    I debugged further and found out a few more things. This might help.
    In the WebLogic Admin console, Home > Summary of Deployments > oracle-bam(11.1.1),
    the MDB application oracle-bam is NOT connected to the messaging system, with the error below.
    EJBs
    MessageDispatcherBean - Error java.lang.IllegalArgumentException: Getting Deployment configuration...
    connectionFactoryJNDIName - Red Cross with warning
    destinationJNDIName - Red Cross with warning
    resourceAdapterJNDIName - Red Cross with warning
    Modules
    Also please check out another similar thread and please update.
    BAMAdapter Issue : java.rmi.ConnectException: Destination unreachable;
    Thanks,
    Sagar

  • Business Objects XI 3.0/3.1- Connection with SAP ERP not with BW

    Hi all members,
    I need to confirm whether we can connect "SAP Business Objects XI 3.0 or XI 3.1" with SAP ERP.
    My client does not have SAP Business Warehouse, nor are they interested in getting SAP BW.
    So, I need to confirm: can we connect "SAP Business Objects XI 3.0 or XI 3.1" with SAP ERP?
    Is there anything else I should go for?
    Could any member kindly guide me in a bit of detail?
    Thanks and Best Regards,
    Izhar

    Hi,
    currently it is not possible to build web intelligence reports fetching data directly from an SAP ERP system. The reason is that SAP ERP data sources are not supported in universes.
    Regards,
    Stratos
    PS: One option is to deploy a BOBJ Rapid Mart. Please note that in this case you require the BOBJ Data Services (ETL) tools. The idea behind the SAP ERP related Rapid Marts is that, in a first step, the data is extracted from your SAP ERP sources and loaded into a relational database (using a predefined DWH schema). Additionally, the package provides you with universes (which rely on this predefined DWH schema) and which use as their data source the database into which your SAP ERP data was extracted.

  • SP2 for SQL Server 2012 with SP1 fails with Error result: -2067529723

    SP2 for SQL Server 2012 with SP1 fails when the installation is started from the command prompt, and it throws the errors below on the passive node of the cluster. No other errors are logged in the Event Viewer or the temp folder, and no log files are created in the bootstrap folder. An error occurred during the SQL Server 2012 Setup operation.
    Error result: -2067529723
    Result facility code: 1220
    Result error code: 5
    For more information, review the SQL Server 2012 Setup logs in your temp folder. It also does not allow running the SQL core setup to uninstall the cluster node and gives the same error. Has anyone run into the same issue? Please help.
    Thanks,
    Petchikumar

    Hi,
    Can you post summary.txt? The link below will help you locate it:
    https://msdn.microsoft.com/en-us/library/ms143702%28v=sql.110%29.aspx
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • RFC calls with SAP table insert - lock tables

    Hi,
    I have an external server program which is calling an RFC function many times at the same time with different transaction types. There are transaction types which download information from SAP (only send data back to the caller), and there are transaction types which upload data to SAP (inserting/updating data in SAP).
    The function calls are synchronous, as the server needs to get feedback directly. The parallel work processes for RFC calls are limited, so a system overload can't happen.
    The problem now is: if the server is down for a while and gets restarted, it opens parallel threads and calls the function in SAP at nearly the same time. That means it will call the function with upload requests (where the table must be locked) and download requests at the same time.
    For the upload requests, the table has to be locked so that we cannot get wrong entries...
    I can only enqueue the table with just two fields... I know that is almost like locking the whole table, but it's not possible to lock it in a different way.
    I tried to use the enqueue function with the parameter WAIT, but it didn't help, as there were too many parallel calls and after some seconds they ended up with an error (because the table had been locked by another call).
    It seems that it's trying to lock the table again for all parallel calls at exactly the same time...
    The calls have to be synchronous, as the server needs to get the feedback directly. Any ideas how to solve this, so that it handles all incoming calls in parallel and waits until the table is unlocked again?
    Thank you for your help!

    Hi,
    thank you for your answer!
    I investigated something new yesterday:
    I thought at the beginning that the problem only occurs when the limit of system work processes is reached. The system has, for example, 15 dialog processes set up and the external server calls the function 20 times in parallel, so we would normally need 5 more dialog processes. The system then takes all 15 dialog processes and the locks get stuck.
    That means I'll maybe only get 4 or 5 uploads which have really updated the tables. All the others couldn't get the table locked for their process.
    But if I now let the server call the function only about 13 times in parallel, nearly all uploads update the table!
    From my point of view, the lock from SAP gets stuck when the limit of dialog processes is reached. It no longer works in the right way...
    The same happens when you configure the system so that only 10 work processes can be used by RFC. If I have more than 10, it becomes critical with the locks...
    WEIRD!
    Can anybody help me out of trouble?
    Thank you!

  • A view from a table.. with cursors?

    I have a table with this:
    contract - execution date - action
    3034 - 01012006 - 72
    3034 - 10022006 - 73
    2011 - 05012006 - 74
    2011 - 10012006 - 73
    and I want a view with this:
    3034 - 01012006 - 72 - 10022006 - 73
    2011 - 05012006 - 74 - 10012006 - 73
    How can I extract this quickly?
    thanks

    There can be too many interpretations of your data and what you want to get.
    Do you want, for example, the first and last values within each group,
    ordered by date? For example:
    SQL> select * from t order by 1,2;
    CONTRACT   DATE#            ACTION
    2011       05012006             74
    2011       07012006             70
    2011       10012006             73
    3034       01012006             72
    3034       05012006             71
    3034       10022006             73
    6 rows selected.
    SQL> select unique contract,
      2  first_value(date#) over(partition by contract order by date#) date1,
      3  first_value(action) over(partition by contract order by date#) action1,
      4  last_value(date#) over(partition by contract order by date#
      5  range between unbounded preceding and unbounded following) date2,
      6  last_value(action) over(partition by contract order by date#
      7  range between unbounded preceding and unbounded following) action2
      8  from t
      9  /
    CONTRACT   DATE1           ACTION1 DATE2           ACTION2
    2011       05012006             74 10012006             73
    3034       01012006             72 10022006             73
    Please, make your task clear.
    Rgds.
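    If a stored view is really what is wanted, the same query can simply be wrapped in CREATE VIEW. A minimal sketch (the view name is made up; it assumes the T table and columns from the example above):
    CREATE OR REPLACE VIEW contract_first_last AS
    SELECT DISTINCT contract,
           -- first date/action per contract
           FIRST_VALUE(date#)  OVER (PARTITION BY contract ORDER BY date#) AS date1,
           FIRST_VALUE(action) OVER (PARTITION BY contract ORDER BY date#) AS action1,
           -- last date/action per contract (window must span the whole partition)
           LAST_VALUE(date#)   OVER (PARTITION BY contract ORDER BY date#
                                     RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS date2,
           LAST_VALUE(action)  OVER (PARTITION BY contract ORDER BY date#
                                     RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS action2
      FROM t;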
