Clearing of qRFC and tRFC tables

Hi all,
We planned to have a BI system, created a client, and defined RFCs from the R/3 system.
Now we have stopped the BI project and removed the BI client,
but the RFCs are still there, so the RFC tables have grown huge.
How do we clear those RFC tables?
And how do we delete the defined RFCs?
Thanks in advance,
Jameel

Hey Jameel,
Those queues save the information that has changed in the R/3 system;
this is done in order to move the "delta changes" from R/3 to BI.
They are created by customizing changes,
and every relevant change in R/3 (to an object that is being tracked)
creates a record in the queues.
If you scheduled the relevant extractors in BI,
they would read this information and the queues would be emptied.
If you don't want to transfer this information to BI,
then you will have to disable the delta queue in customizing
and delete those queues.
(If you only delete the queues without changing the customizing,
they will be re-created.)
About the tRFC entries:
I don't know if they are relevant;
please provide more information (what is the destination of the tRFC, and what are the functions and error messages?).
P.S. This is a BI question.

Similar Messages

  • The difference between qRFC and tRFC

    I am studying RFC now. I don't understand when I should use qRFC and when tRFC. Can any expert tell me?
    Thanks

    Hi Kim,
    Welcome to SDN.
    The qRFC Communication Model
    qRFC Properties and Possible Uses
    All types of applications need to communicate with other applications. This communication may take place within an SAP system, with another SAP system, or with an application in a remote external system. An interface that can be used for this task is the Remote Function Call (RFC). RFCs can be used to start applications in remote systems and to execute particular functions.
    Whereas the first version of RFC, the synchronous RFC (sRFC), required both systems involved to be active in order to produce synchronous communication, the subsequent generations of RFC have a greater range of features at their disposal (such as serialization, a guarantee of one-time-only execution, and the fact that the receiver system does not have to be available). These features were further enhanced by the queued RFC with inbound/outbound queues.
    Communication between applications within an SAP system and also with a remote system can basically be achieved using the Remote Function Call (RFC). Here, the following scenarios are possible:
    ·         Communication between two independent SAP systems
    ·         Communication between a calling SAP system and an external receiving system
    ·         Communication between a calling external system and an SAP receiving system
    The following communication model shows what these communication scenarios may look like in reality. The actual sending process is still done by the tRFC (transactional Remote Function Call). Inbound and outbound queues are added to the tRFC, leaving us with a qRFC (queued Remote Function Call). The sender system is also called the client system, while the target system corresponds to the server system.
    Scenario 1: tRFC
    This scenario is appropriate if the items of data being sent are independent of each other. A calling application (or client) in system 1 uses a tRFC connection to a called application (or server) in system 2. In this scenario, data is transferred by tRFC, meaning that each function module sent to the target system is guaranteed to be executed one time only. You cannot define the sequence in which the function modules are executed, nor the time of execution. If an error occurs during the transfer, a batch job is scheduled, which sends the function module again after 15 minutes.
    Scenario 2: qRFC with outbound queue
    In this scenario, the sender system uses an outbound queue to serialize the data that is being sent. This means that function modules which depend on each other (such as update and then change) are put into the outbound queue of the sender system, and are guaranteed to be sent to the target system one after the other and one time only. The called system (server) has no knowledge of the outbound queue in the sender system (client), meaning that in this scenario, every SAP system can also communicate with a non-SAP system. (Note: the program code of the server system does not need to be changed; however, it must be tRFC-capable.)
    Scenario 3: qRFC with inbound queue (and outbound queue)
    In this scenario, as well as an outbound queue in the sender system (client), there is also an inbound queue in the target system (server). If a qRFC with inbound queue exists, this always means that an outbound queue exists in the sender system. This guarantees the sequence and efficiently controls the resources in the client system and server system. The inbound queue only processes as many function modules as the system resources in the target system (server) allow at that time. This prevents a server from being blocked by a client. A scenario with only an inbound queue in the server system (and no outbound queue) is not possible, since the outbound queue is needed in the client system in order to guarantee the sequence and to prevent individual applications from blocking all work processes in the client system.
    Properties of the Three Communication Types 
    To help you decide which communication type you should use in your system landscape for your requirements, the advantages of the three communication types are listed below:
           1.      tRFC: for independent function modules only
           2.      qRFC with outbound queue: guarantees that mutually dependent function modules are sent one after the other and one time only (serialization). Suitable for communication with non-SAP servers.
           3.      qRFC with inbound queue: in addition to the outbound queue in the client system, an inbound queue makes sure that only as many function modules are processed in the target system (server) as the current resources allow. Client and server system must be SAP systems. One work process is used for each inbound queue.
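    To make the contrast concrete, here is a toy sketch in plain JavaScript (an analogy only, not SAP code): tRFC-style dispatch executes every call exactly once but in no guaranteed order, while qRFC-style dispatch drains a FIFO queue so the order is preserved. The function names are my own.

```javascript
// Toy analogy only -- not SAP code. Each "call" is just a value.

// tRFC-style: every call runs exactly once, but order is undefined.
function trfcDispatch(calls, exec) {
    var pending = calls.slice();
    var results = [];
    while (pending.length > 0) {
        // pick an arbitrary pending call to mimic undefined ordering
        var i = Math.floor(Math.random() * pending.length);
        results.push(exec(pending.splice(i, 1)[0]));
    }
    return results;
}

// qRFC-style: an outbound queue is drained strictly first-in, first-out.
// (Note: this consumes the queue array passed in.)
function qrfcDispatch(queue, exec) {
    var results = [];
    while (queue.length > 0) {
        results.push(exec(queue.shift()));
    }
    return results;
}
```

    With mutually dependent steps such as ['create', 'change', 'delete'], only the queued variant guarantees they arrive in that order; the tRFC variant only guarantees each step runs once.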
    Queued Remote Function Call (qRFC)
    Contents:
    The information about qRFC is organized into the following main sections, with more detailed subsections:
    The qRFC Communication Model
    ·        qRFC with Outbound Queues
    ·        qRFC with Inbound Queues
    qRFC Administration
    ·        qRFC Administration: Introductory Example
    ·        Outbound Queue Administration
    ·        Inbound Queue Administration
    qRFC Programming
    ·        qRFC Programming: Introductory Example
    ·        Outbound Queue Programming
    ·        Inbound Queue Programming
    ·        qRFC API
    For an introduction to the new bgRFC (Background RFC), use the following links:
    bgRFC (Background RFC)
    ·        bgRFC Administration
    ·        bgRFC Programming
    Using Asynchronous Remote Function Calls
    Asynchronous remote function calls (aRFCs) are similar to transactional RFCs, in that the user does not have to wait for their completion before continuing the calling dialog. There are three characteristics, however, that distinguish asynchronous RFCs from transactional RFCs:
    ·        When the caller starts an asynchronous RFC, the called server must be available to accept the request.
    The parameters of asynchronous RFCs are not logged to the database, but sent directly to the server.
    ·        Asynchronous RFCs allow the user to carry on an interactive dialog with the remote system.
    ·        The calling program can receive results from the asynchronous RFC.
    You can use asynchronous remote function calls whenever you need to establish communication with a remote system, but do not want to wait for the function's result before continuing processing. Asynchronous RFCs can also be sent to the same system. In this case, the system opens a new session (or window). You can then switch back and forth between the calling dialog and the called session.
    To start a remote function call asynchronously, use the following syntax:
    CALL FUNCTION Remotefunction STARTING NEW TASK Taskname
    DESTINATION ...
    EXPORTING...
    TABLES   ...
    EXCEPTIONS...
    The following calling parameters are available:
    §         TABLES
    passes references to internal tables. All table parameters of the function module must contain values.
    §         EXPORTING
    passes values of fields and field strings from the calling program to the function module. In the function module, the corresponding formal parameters are defined as import parameters.
    §         EXCEPTIONS
    See Using Predefined Exceptions for RFCs
    RECEIVE RESULTS FROM FUNCTION Remotefunction is used within a FORM routine to receive the results of an asynchronous remote function call. The following receiving parameters are available:
    §         IMPORTING
    §         TABLES
    §         EXCEPTIONS
    The addition KEEPING TASK prevents an asynchronous connection from being closed after receiving the results of the processing. The relevant remote context (roll area) is kept for re-use until the caller terminates the connection.
    Transactional RFC (tRFC)
    Transactional RFC (tRFC, previously known as asynchronous RFC) is an asynchronous communication method that executes the called function module just once in the RFC server. The remote system need not be available at the time when the RFC client program is executing a tRFC. The tRFC component stores the called RFC function, together with the corresponding data, in the SAP database under a unique transaction ID (TID).
    If a call is sent, and the receiving system is down, the call remains in the local queue. The calling dialog program can proceed without waiting to see whether the remote call was successful. If the receiving system does not become active within a certain amount of time, the call is scheduled to run in batch.
    tRFC is always used if a function is executed as a Logical Unit of Work (LUW). Within a LUW, all calls
    ·         are executed in the order in which they are called
    ·         are executed in the same program context in the target system
    ·         run as a single transaction: they are either committed or rolled back as a unit.
    Implementation of tRFC is recommended if you want to maintain the transactional sequence of the calls.
    Disadvantages of tRFC
    ·       tRFC processes all LUWs independently of one another. Due to the number of activated tRFC processes, this procedure can reduce performance significantly in both the sending and the target systems.
    ·       In addition, the sequence of LUWs defined in the application cannot be kept. It is therefore impossible to guarantee that the transactions will be executed in the sequence dictated by the application. The only thing that can be guaranteed is that all LUWs are transferred sooner or later.
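    The once-only guarantee described above (each call stored under a unique TID) can be sketched as a toy model in plain JavaScript (an analogy only, not SAP code; the names makeServer and execute are my own): the server remembers which transaction IDs it has already processed, so a retried delivery of the same TID is not executed a second time.

```javascript
// Toy analogy only -- not SAP code.
// The server keeps a record of processed transaction IDs (TIDs),
// so a call that is retried after a communication error runs only once.
function makeServer() {
    var processed = {};               // TID -> true once the call has run
    return {
        execute: function (tid, fn) {
            if (processed[tid]) {
                return "skipped";     // duplicate delivery: already executed
            }
            processed[tid] = true;
            fn();                     // run the call's side effects
            return "executed";
        }
    };
}
```

    A first delivery of TID '4711' returns "executed"; a retry of the same TID returns "skipped", so the side effects happen exactly once even when the transfer is repeated.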
    thanks
    karthik
    Reward me if useful.

  • QRFC and TRFC Difference...

    Hello All,
                I need some help in understanding tRFC and qRFC in CRM Middleware.
    Please answer the following to help me understand tRFC and qRFC:
    1. During data replication between CRM and R/3, when does the system use qRFC and when does it use tRFC? I mean, in which scenarios?
    2. tRFC errors can be checked in transaction SM58. In what context does the system create an entry in SM58?
    Regards,
    Srini.

    Hi Srini,
    a qRFC is a serialized (queued) tRFC. Technically the two are the same, except that qRFC makes sure that all predecessors of the contained LUW have already been committed.
    Data replication via CRM Middleware always uses the qRFC.
    HTH
    Thomas

  • QRFC and tRFC outbound-queue

    In SXMB_MONI entries,
    when the message is, for example, inbound: IDOC and outbound: AENGINE, the queue seen is a tRFC outbound queue,
    and when inbound: IDOC and outbound: PE, the queue seen is a qRFC outbound queue.
    Can you explain why?
    Also, nowhere can I see inbound queues (XBTI* or XBQI*) in SXMB_MONI.
    Can you explain why?

    Hi,
    Please see the below link
    EO/EOIO/BE queues: /people/sap.india5/blog/2006/01/03/xi-asynchronous-message-processing-understanding-xi-queues-part-i
    Please go through these links
    For queues in message mapping
    /people/venkat.donela/blog/2005/06/09/introduction-to-queues-in-message-mapping
    Here are the Queues for Asynchronous Message Processing
    http://help.sap.com/saphelp_nw2004s/helpdata/en/7b/94553b4d53273de10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f078394a-4469-2910-c4bf-853c75674694
    Regards
    Chilla

  • Relation between qRFC and SXMB_Moni tables.

    Hello Experts,
    I would like to know if there exists any reference or link between the qRFC tables (TRFCQOUT / TRFCQIN) and the backend master tables of SXMB_MONI (SXMSPMAST, SXMSPERR, SXMSPHIST), etc.
    This would give me an idea of whether I can fetch the details of the records/messages stuck in qRFC from SXMB_MONI.
    Kindly advice.
    Thanks in advance,
    Elizabeth.

    Hi Elizabeth, 
    Check this recent blog; it may be useful to you:
    SAP XI/PI: Alerts for Queue Errors
    /people/santhosh.kumarv/blog/2009/05/19/sap-xipi-alerts-for-queue-errors
    Regards,
    Sunil.

  • How to debug qRFC and Trfc.

    Hi All,
    How do I debug an asynchronous function module and a transactional function module when they are called in the background?
    Regards,
    Anuj Jain

    Hi Anuj,
    In transaction SM50 (process overview), you can select a background process and choose Program/Mode -> Program -> Debugging from the menu.
    Debugging tRFC
    1. Go to the program 'ZREPORT' and find the statement CALL FUNCTION 'ZFUNCTION' IN BACKGROUND TASK.
    2. Set a breakpoint in 'ZREPORT' just before the CALL FUNCTION statement.
    3. When you reach the breakpoint, go to Settings -> Display/Change Debugger Settings and select the flag "tRFC (In Background Task): Block Sending".
    4. Execute SM58, select the background task, and choose Edit -> Debug LUW; the background task will start and the debugger will stop at the FM.
    Let me know If any further queries

  • InDesign Table Fit (Clear overflow, Height and Row fit)

    Hi All,
    I am using MagicFit.jsx to fit the table, but it does not clear the overflow and does not fit the height. I want to do all of this.
    Please suggest.
    MagicFit.jsx
    function MagicFit(){
        app.scriptPreferences.version = 4.0;          //Because I am using CS5
        MagicFit_1()
        app.scriptPreferences.version = 6.0;
    function MagicFit_1(){
        /*  MagicFit 2.1b for InDesign CS / CS2 -- 01/18/06
            Fits to content the WIDTH of the selected text container(s)
            Features:
            - Fits selected TextFrame(s) width to content
            - 1st call: "strict" fitting (preserve each line's length)
            - 2nd call (within 2 secs): "fluid" fitting (preserve height)
            - (NEW) Alternate fitting of table column(s) if selected
            - (NEW) Compute a minimal width by parsing embedded objects
            - (NEW) Runs on selected frames, CELLS, groups, insertion pt
            Installation & usage:
            0) !! CS2 users only !!
               Rename this script file with the .jsx extension
               (activating ExtendScript features)
            1) Put the present file into the Presets/Scripts/ subdir
            2) Run InDesign, open a document and select object(s) to fit
               (or put the insertion point into one)
            3) Run the script via Window > Automation > Scripts
               and double-click on MagicFit.js
               Alternate way: assign a keyboard shortcut to the script via
               Edit > Keyboard Shortcuts... > Product area: "Scripts"
            Help (FR): http://marcautret.free.fr/geek/indd/magicfit/
               (sorry, that's a French web page!)
            Feedback: [email protected]
        */
        //            SETTINGS
        var LATENCE = 2;         // in seconds (default:2)
        var PRECISION = 0.5;    // in pts (default:0.5)
        var APP_INT_VERSION = parseInt(app.version);
        //            TOOLBOX FUNCTIONS
        /*void*/ function exitMessage(/*exception*/ ex){
            alert("Error:\n" + ex.toString());
            exit();
        }
        //            DOCUMENT METHODS
        /*void*/ Document.prototype.setUnitsTo = function(/*units*/ newUnits){        // units can be single value (horiz=vert) or array(horizUnits, vertUnits)
            var arrUnits = (newUnits.length) ? newUnits : new Array(newUnits,newUnits);
            this.viewPreferences.horizontalMeasurementUnits = arrUnits[0];
            this.viewPreferences.verticalMeasurementUnits = arrUnits[1];
        }
        /*arr2*/ Document.prototype.getUnits = function(){
            return(Array(
                this.viewPreferences.horizontalMeasurementUnits,
                this.viewPreferences.verticalMeasurementUnits));
        }
        /*bool*/ Document.prototype.withinDelay = function(){
            if (this.label)
                return( (Date.parse(Date())-this.label) <= LATENCE*1000 );
            return(false);
        }
        /*void*/ Document.prototype.storeTimeStamp = function(){
            this.label = Date.parse(Date()).toString();
        }
        //            GENERIC METHODS (OBJECT LEVEL)
        // Returns the "fittable-container" corresponding to THIS
        // Return array or collection HorizFit-compliant
        // NULL if failure
        /*arr*/ Object.prototype.asObjsToFit = function(){
            switch(this.constructor.name){
                case "TextFrame" :            // textframe -> singleton this
                    return(Array(this));
                case "Cell" :                // cells -> parent columns
                    var r = new Array();
                    // !! [CS1] Cell::parentColumn === Cell !!
                    // !! [CS2] Cell::parentColumn === Column !!
                    // !! [CS2] Cells::lastItem().parentColumn BUG !!
                    var c0 = this.cells.firstItem().name.split(":")[0];
                    var c1 = this.cells.lastItem().name.split(":")[0];
                    for (var i=c0 ; i<=c1; i++)
                        r.push(this.parent.columns[i]);
                    return(r);
                case "Table" /*CS2*/ :        // table -> columns
                    return(this.columns);
                case "Group" :                // group -> textFrames
                    return((this.textFrames.length>0) ? this.textFrames : null);
                case "Text" :                // selection is Text or InsertionPoint
                case "InsertionPoint" :        // -> run on container
                    var textContainer = this.getTextContainer();
                    return((textContainer) ? textContainer.asObjsToFit() : null);
                default:
                    return(null);
            }
        }
        // Returns Text's or InsertionPoint's container :
        // Type returned: TextFrame or Cell - NULL if failure
        /*obj*/ Object.prototype.getTextContainer = function(){
            try{ // try...catch because of CS2 behaviour
                if (this.parent.constructor.name == "Cell")           
                    return(this.parent);
                if (this.parentTextFrames)        // plural in CS2
                    return(this.parentTextFrames[0]);       
                if (this.parentTextFrame)    // single in CS1
                    return(this.parentTextFrame);
                return(null);
            }catch(ex) {return(null);}
        }
        // Parse embedded "objects": tables, pageitems [including graphics]
        // and returns the max width
        // !! All parsed objects have to provide a computeWidth method !!
        /*int*/ Object.prototype.computeIncludedObjectsWidth = function(){
            var objsNames = new Array("pageItems","tables"); // could be extended
            var objsWidth = 0;
            var w = 0;
            for (var j=objsNames.length-1 ; j>=0 ; j--){
                for (var i=this[objsNames[j]].length-1 ; i>=0 ; i--){
                    try{
                        w = this[objsNames[j]][i].computeWidth({VISIBLE:true});
                    }catch(ex){
                        w=0;
                    }
                    if (w > objsWidth) objsWidth=w;
                }
            }
            return(objsWidth);
        }
        // Generic computeWidth method for bounded objects
        // VISIBLE true -> external width
        // VISIBLE false -> internal width
        /*int*/ Object.prototype.computeWidth = function(/*bool*/ VISIBLE){
            if (VISIBLE){
                if (this.visibleBounds)
                    return(this.visibleBounds[3]-this.visibleBounds[1]);
            }
            else{
                if (this.geometricBounds)
                    return(this.geometricBounds[3]-this.geometricBounds[1]);
            }
            return(0);
        }
        // Override Object::computeWidth for Table : returns simply the width
        /*int*/ Table.prototype.computeWidth = function(){
            return(this.width);
        }
        // Returns chars count for each LINE of this (-> array)
        // empty array  IF  this.lines==NULL  OR  this.lines.length==0
        /*arr*/ Object.prototype.createLinesSizesArray = function(){
            var r = new Array();
            if (this.lines)
                for (var i=this.lines.length-1; i>=0 ; i--)
                    r.unshift(this.lines[i].characters.length);
            return(r);
        }
        // Compare chars count beetween THIS and arrSizes argument
        // (generic method just presuming that THIS have lines prop.)
        // -> TRUE if isoceles, FALSE if not
        /*bool*/ Object.prototype.isoceleLines = function(/*arr*/ arrSizes){
            if (this.lines.length != arrSizes.length) return(false);
            for (var i=arrSizes.length-1 ; i>=0 ; i--)
                if (arrSizes[i] != this.lines[i].characters.length)
                    return(false);
            return(true);
        }
        //            TEXTFRAME METHODS
        // intanciate the part of the abstract process for TextFrames
        /*bool*/ TextFrame.prototype.isEmpty = function(){
            return(this.characters.length==0);
        }
        /*bool*/ TextFrame.prototype.isOverflowed = function(){
            return(this.overflows);
        }
        /*int*/ TextFrame.prototype.getWidth = function(){
            return(this.computeWidth({VISIBLE:false}));
        }
        // Redim the frame in width by widthOffset
        /*void*/ TextFrame.prototype.resizeWidthBy = function(/*int*/ widthOffset){
            this.geometricBounds = Array(
                this.geometricBounds[0],
                this.geometricBounds[1],
                this.geometricBounds[2],
                this.geometricBounds[3] + widthOffset);
        }
        // Returns the minWidth of the frame according to embedded content
        // and inner space
        // inner width space
        /*int*/ TextFrame.prototype.computeMinWidth = function(){
            var inSpace = this.textFramePreferences.insetSpacing;
            var inWidth = (inSpace.length) ?
                inSpace[1] + inSpace[3] :    // distinct left & right inspace
                2*inSpace;                    // global inspace
            return(this.computeIncludedObjectsWidth() + inWidth);
        }
        /*int*/ TextFrame.prototype.getCharsCount = function(){
            return(this.characters.length);
        }
        /*int*/ TextFrame.prototype.getLinesCount = function(){
            return(this.lines.length);
        }
        // Return chars count BY LINE (-> array)
        /*arr*/ TextFrame.prototype.getLinesSizes = function(){
            return(this.createLinesSizesArray());
        }
        // YES -> -1  , NOT -> 1
        /*int*/ TextFrame.prototype.preserveCharsCount = function(/*int*/ charsCount){
            return( (this.characters.length != charsCount) ? 1 : -1 );
        }
        // Indicates whether:
        // - chars count equals linesCount
        // - frame DOES NOT overflow
        // YES -> -1  , NOT -> 1
        /*int*/ TextFrame.prototype.preserveLinesCount = function(/*int*/ linesCount){
            return( ((this.overflows) || (this.lines.length != linesCount)) ? 1 : -1 );
        }
        // Indicates whether:
        // each x line isoceles linesSizes[x]
        // YES -> -1  , NOT -> 1
        /*int*/ TextFrame.prototype.preserveLinesSizes = function(/*arr*/ linesSizes){
            return( (this.isoceleLines(linesSizes)) ? -1 : 1 );
        }
        //            COLUMN METHODS
        // intanciate the part of the abstract process for Columns
        /*bool*/ Column.prototype.isEmpty = function(){
            for (var i=this.cells.length-1; i>=0 ; i--)
                if (this.cells[i].characters.length>0) return(false);
            return(true);
        }
        // Indicates whether AT LEAST a cell overflows
        // !! We can't trust Column::overflows !!
        /*bool*/ Column.prototype.isOverflowed = function(){
            for (var i=this.cells.length-1 ; i>= 0 ; i--)
                if (this.cells[i].overflows) return(true);
            return(false);
        }
        /*int*/    Column.prototype.getWidth = function(){
            return(this.width);
        }
        // Redim the column width by widthOffset
        // !! we HAVE TO update the display after resizing !!
        /*void*/ Column.prototype.resizeWidthBy = function(/*int*/ widthOffset){
            this.width += widthOffset;
            // updates the display
            if (APP_INT_VERSION > 3)        // CS2+
                this.recompose();
            else{
                // CS -- thx to Tilo for this hack --
                for(var i = this.cells.length - 1 ; i >= 0 ; i-- ){
                    // Comparing the cell contents against null
                    // seems to internally recompose the cell!
                    if (this.cells[i].contents == null) {}
                }
            }
        }
        // Returns the minWidth of the column according to embedded content
        // and inner space
        /*int*/ Column.prototype.computeMinWidth = function(){
            var iCell = null;
            var w = 0;
            var r = 0;
            for (var i=this.cells.length-1 ; i>= 0 ; i--){
                iCell = this.cells[i];
                w = iCell.computeIncludedObjectsWidth() +
                    iCell.leftInset + iCell.rightInset;
                if (w > r) r = w;
            }
            return(r);
        }
        // Returns SIGNED chars count BY CELL (negatif if overflows)
        /*arr*/ Column.prototype.getCharsCount = function(){
            var r = new Array();
            var sgn = 0;
            for (var i=this.cells.length-1 ; i>= 0 ; i--){
                sgn = (this.cells[i].overflows) ? -1 : 1;
                r.unshift(sgn * this.cells[i].characters.length);
            }
            return(r);
        }
        // Returns lines count BY CELL
        /*arr*/ Column.prototype.getLinesCount = function(){
            var r = new Array();
            for (var i=this.cells.length-1 ; i>= 0 ; i--)
                r.unshift(this.cells[i].lines.length);
            return(r);
        };
        // Matrix: returns the chars count BY LINE / BY CELL
        /*bi-arr*/ Column.prototype.getLinesSizes = function(){
            var r = new Array();
            for (var i=this.cells.length-1 ; i>= 0 ; i--)
                    r.unshift(this.cells[i].createLinesSizesArray());
            return(r);
        };
        // Indicates whether:
        // overflow sign BY CELL x equals sgn(charsCount[x])
        // YES -> -1  , NO -> 1
        /*int*/ Column.prototype.preserveCharsCount = function(/*arr*/ charsCount){
            var sgn = 0;
            for (var i=this.cells.length-1 ; i>= 0 ; i--){
                sgn = (this.cells[i].overflows) ? -1 : 1;
                if (sgn * charsCount[i] < 0) return(1);
            }
            return(-1);
        };
        // Indicates whether:
        // - lines count BY CELL x equals linesCount[x]
        // - no cell overflows
        // YES -> -1  , NO -> 1
        /*int*/ Column.prototype.preserveLinesCount = function(/*arr*/ linesCount){
            for (var i=this.cells.length-1 ; i>= 0 ; i--){
                if (this.cells[i].overflows) return(1);
                if (this.cells[i].lines.length != linesCount[i]) return(1);
            }
            return(-1);
        };
        // Indicates whether:
        // - in each CELL x, each LINE y matches linesSizes[x][y]
        // (if a cell overflows, returns 1)
        // YES -> -1  , NO -> 1
        /*int*/ Column.prototype.preserveLinesSizes = function(/*bi-arr*/ linesSizes){
            for (var i=this.cells.length-1 ; i>= 0 ; i--){
                if (this.cells[i].overflows) return(1);
                if (this.cells[i].isoceleLines(linesSizes[i]) == false) return(1);
            }
            return(-1);
        };
        //            CENTRAL METHODS
        // !! [CS2 only] Prevents a strange crash on wide table columns selection !!
        // !! Thx to Tilo for this hack --
        /*void*/ Object.prototype.manageFit = function(/*bool*/ FLUIDFITTING){
            if (APP_INT_VERSION>=4){
                $.gc();
            }
            // NOP if empty object
            if (this.isEmpty()) return;
            // min width to preserve
            var minWidth = this.computeMinWidth();
            // let's go!
            this.processFit(FLUIDFITTING, minWidth);
        };
        // Fits this object
        // if FLUIDFITTING -> fluid fitting, else: strict fitting
        // minWidth sets the threshold
        /*void*/ Object.prototype.processFit = function(/*bool*/ FLUIDFITTING, /*int*/ minWidth){
            if (FLUIDFITTING){ // FLUID FITTING
                if (this.isOverflowed()){ // NB: overflowed CELLS are "transparent"
                    var charsCount = this.getCharsCount();
                    var evalFlag = function(thisObj){return(thisObj.preserveCharsCount(charsCount));};
                }
                else{
                    var linesCount = this.getLinesCount();
                    evalFlag = function(thisObj){return(thisObj.preserveLinesCount(linesCount));};
                }
            }
            else{ // STRICT FITTING
                // NB: overflowed columns are "untouchable"
                if ((this.constructor.name=="Column") && (this.isOverflowed()))
                    return;
                var linesSizes = this.getLinesSizes();
                var evalFlag = function(thisObj){return(thisObj.preserveLinesSizes(linesSizes));};
            }
            // DICHOTOMIC LOOP
            var sgnFLAG = -1;
            var w = ( this.getWidth() - minWidth ) / 2;
            while (w >= PRECISION){
                // resize width by +/- w
                this.resizeWidthBy(sgnFLAG*w);
                // +1 = increase | -1 = reduce
                sgnFLAG = evalFlag(this);
                // halve the step
                w = w/2;
            }
            // exit with sgnFLAG==+1 -> undo last reduction -> +2w
            if (sgnFLAG>0) this.resizeWidthBy(2*w);
        };
        // MAIN PROGRAM
        if ( app.documents.length > 0 ){
            if ( app.activeWindow.selection.length > 0 ){
                try{
                    var thisDoc = app.activeDocument;
                    var FLUIDFLAG = thisDoc.withinDelay();
                    var memUnits = thisDoc.getUnits();
                    thisDoc.setUnitsTo(MeasurementUnits.points);
                    var selObjs = app.activeWindow.selection;
                    var objsToFit = null;
                    for (var i=selObjs.length-1 ; i>=0 ; i--){
                        objsToFit = selObjs[i].asObjsToFit();
                        if (objsToFit){
                            for (var j=objsToFit.length-1 ; j>=0 ; j--)
                                objsToFit[j].manageFit(FLUIDFLAG);
                        }
                    }
                    thisDoc.setUnitsTo(memUnits);
                    thisDoc.storeTimeStamp();
                }catch(ex){
                    thisDoc.setUnitsTo(memUnits);
                    exitMessage(ex);
                }
            }
            else
                alert("No object selected!");
        }
        else
            alert("No document opened!");
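    For readers curious how the dichotomic loop in processFit works: it is a plain binary search over the column width. Each pass halves the step and widens or narrows depending on whether the content still "fits". A minimal Python sketch of the same idea; the `fits` predicate stands in for the preserve* checks, and all names here are illustrative, not part of the script:

```python
PRECISION = 0.5  # stop when the step drops below half a point

def fit_width(width, min_width, fits):
    """Shrink `width` toward `min_width` by bisection.

    `fits(w)` must return True when the content still fits at width w
    (it plays the role of the preserve* checks in the script above).
    """
    sign = -1                             # start by reducing
    step = (width - min_width) / 2.0
    while step >= PRECISION:
        width += sign * step              # resize by +/- step
        sign = -1 if fits(width) else 1   # fits -> keep reducing
        step /= 2.0
    if sign > 0:                          # exited on an "increase" request:
        width += 2 * step                 # undo the last reduction
    return width

# Converges just above the smallest width that still fits
print(fit_width(100, 10, lambda w: w >= 40))
```

    The undo step at the end mirrors the script's `resizeWidthBy(2*w)`: if the last move was a reduction that broke the fit, it is rolled back.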

    InDesign table cells don't break across pages the way they do in Word. It's all or nothing.

  • DAC: Clearing Failed Execution Plans and BAW Tables

    Hi all,
    Thank you for taking the time to review this post.
    Background
    Oracle BI Applications 7.9.6 Financial Analytics
    OLTP Source: E-Business Suite 11.5.10
    Steps Taken
    1. In DAC I have created a New Source Container based on Oracle 11.5.10
    2. I have updated the parameters in the Source System Parameters
    3. Then I created a new Execution Plan as a copy of the Financials_Oracle 11.5.10 record and checked Full Load Always
    4. Added new Financials Subject Areas so that they have the new Source System
    5. Updated the Parameters tab with the new Source System and generated the parameters
    6. Built a new Execution Plan - Fails
    Confirmation for Rerun
    I want to confirm that the correct steps to Rerun an Execution Plan are as follows. I want to ensure that the OLTP (BAW) tables are truncated. I am experiencing duplicates in the W_GL_SEGMENTS_D (and DS) table even though there are no duplicates on the EBS.
    In DAC under the EXECUTE window do the following:
    - Navigate to the 'Current Run' tab.
    - Highlight the failed execution plan.
    - Right click and select 'Mark as completed.'
    - Enter the numbers/text in the box.
    Then:
    - In the top toolbar select Tools --> ETL Management --> Reset Data Sources
    - Enter the numbers/text in the box.
    Your assistance is greatly appreciated.
    Kind Regards,
    Gary.

    Hi HTH,
    I can confirm that I do not have duplicates on the EBS side.
    I got the SQL statement by:
    1. Opening Mapping SDE_ORA_GL_SegmentDimension in the SDE_ORA11510_Adaptor.
    2. Reviewing the SQL statement in the Source Qualifier SQ_FND_FLEX_VALUES.
    3. Running this SQL command against my EBS 11.5.10 source OLTP - the duplicates that are appearing in the W_GL_SEGMENT_DS table do not exist there.
    SELECT
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
    FND_FLEX_VALUES.FLEX_VALUE,
    MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
    MAX(FND_FLEX_VALUES.LAST_UPDATE_DATE),
    MAX(FND_FLEX_VALUES.LAST_UPDATED_BY),
    MAX(FND_FLEX_VALUES.CREATION_DATE),
    MAX(FND_FLEX_VALUES.CREATED_BY),
    MAX(FND_FLEX_VALUES.START_DATE_ACTIVE),
    MAX(FND_FLEX_VALUES.END_DATE_ACTIVE),
    FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE LAST_UPDATE_DATE1
    FROM
    FND_FLEX_VALUES, FND_FLEX_VALUE_SETS, FND_FLEX_VALUES_TL
    WHERE
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID = FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID AND FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND
    FND_FLEX_VALUES_TL.LANGUAGE = 'US' AND
    (FND_FLEX_VALUES.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS') OR
    FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS'))
    GROUP BY
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUE_SETS.FLEX_VALUE_SET_NAME,
    FND_FLEX_VALUES.FLEX_VALUE,
    FND_FLEX_VALUE_SETS.LAST_UPDATE_DATE
    However, one thing that I noticed was that I wanted to validate what the parameter $$LAST_EXTRACT_DATE is being populated with.
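    Coming back to the duplicates for a moment: the GROUP BY in the query above collapses multiple source rows per (FLEX_VALUE_SET_ID, FLEX_VALUE_SET_NAME, FLEX_VALUE) key, keeping the MAX of the non-key columns, so only one row per key should survive. A small Python sketch of that grouping idea (the data is invented for illustration):

```python
from collections import defaultdict

# Hypothetical rows: (value_set_id, flex_value, last_update_date)
rows = [
    (101, "A100", "2010-01-01"),
    (101, "A100", "2010-02-15"),   # same key, later update
    (102, "B200", "2010-03-01"),
]

# Mimic GROUP BY value_set_id, flex_value with MAX(last_update_date)
grouped = defaultdict(list)
for set_id, value, updated in rows:
    grouped[(set_id, value)].append(updated)

deduped = {key: max(dates) for key, dates in grouped.items()}
print(deduped)  # one surviving row per (set_id, flex_value) key
```

    If duplicates still appear downstream, it suggests they are introduced after this extract (or by a column in the GROUP BY list differing between joined rows), not by the source data itself.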
    My investigation took me along the following route:
    Checked what was set up in the DAC (DAC Build AN 10.1.3.4.1.20090415.0146, build date: April 15 2009):
    1. Design View -> Source System Parameters -> $$LAST_EXTRACT_DATE = Runtime Variable = @DAC_SOURCE_PRUNED_REFRESH_TIMESTAMP (I haven't been able to track this variable down!)
    2. Setup View -> DAC System Properties -> InformaticaParameterFileLocation -> $INFA_HOME/server/infa_shared/SrcFiles
    Reviewing one of the log files for my failing Task:
    $INFA_HOME/server/infa_shared/SrcFiles/ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.txt
    I noticed that several variables near the bottom (including $$LAST_EXTRACT_DATE) have not been populated. This variable gets populated at runtime, but is there a log file that shows the value it gets populated with? I would also have expected a substitution variable in place of a static value.
    [SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full]
    $$ANALYSIS_END=01/01/2011 12:59:00
    $$ANALYSIS_END_WID=20110101
    $$ANALYSIS_START=12/31/1979 01:00:00
    $$ANALYSIS_START_WID=19791231
    $$COST_TIME_GRAIN=QUARTER
    $$CURRENT_DATE=03/17/2010
    $$CURRENT_DATE_IN_SQL_FORMAT=TO_DATE('2010-03-17', 'YYYY-MM-DD')
    $$CURRENT_DATE_WID=20100317
    $$DATASOURCE_NUM_ID=4
    $$DEFAULT_LOC_RATE_TYPE=Corporate
    $$DFLT_LANG=US
    $$ETL_PROC_WID=21147868
    $$FILTER_BY_SET_OF_BOOKS_ID='N'
    $$FILTER_BY_SET_OF_BOOKS_TYPE='N'
    $$GBL_CALENDAR_ID=WPG_Calendar~Month
    $$GBL_DATASOURCE_NUM_ID=4
    $$GLOBAL1_CURR_CODE=AUD
    $$GLOBAL1_RATE_TYPE=Corporate
    $$GLOBAL2_CURR_CODE=GBP
    $$GLOBAL2_RATE_TYPE=Corporate
    $$GLOBAL3_CURR_CODE=MYR
    $$GLOBAL3_RATE_TYPE=Corporate
    $$HI_DATE=TO_DATE('3714-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HI_DT=01/01/3714 12:00:00
    $$HR_ABSNC_EXTRACT_DATE=TO_DATE('1980-01-01 08:19:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HR_WRKFC_ADJ_SERVICE_DATE='N'
    $$HR_WRKFC_EXTRACT_DATE=01/01/1970
    $$HR_WRKFC_SNAPSHOT_DT=TO_DATE('2004-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$HR_WRKFC_SNAPSHOT_TO_WID=20100317
    $$Hint1=
    $$Hint_Tera_Post_Cast=
    $$Hint_Tera_Pre_Cast=
    $$INITIAL_EXTRACT_DATE=06/27/2009
    $$INVPROD_CAT_SET_ID=27
    $$INV_PROD_CAT_SET_ID10=
    $$INV_PROD_CAT_SET_ID1=27
    $$INV_PROD_CAT_SET_ID2=
    $$INV_PROD_CAT_SET_ID3=
    $$INV_PROD_CAT_SET_ID4=
    $$INV_PROD_CAT_SET_ID5=
    $$INV_PROD_CAT_SET_ID6=
    $$INV_PROD_CAT_SET_ID7=
    $$INV_PROD_CAT_SET_ID8=
    $$INV_PROD_CAT_SET_ID9=
    $$LANGUAGE=
    $$LANGUAGE_CODE=E
    $$LAST_EXTRACT_DATE=
    $$LAST_EXTRACT_DATE_IN_SQL_FORMAT=
    $$LAST_TARGET_EXTRACT_DATE_IN_SQL_FORMAT=
    $$LOAD_DT=TO_DATE('2010-03-17 19:27:10', 'YYYY-MM-DD HH24:MI:SS')
    $$LOW_DATE=TO_DATE('1899-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$LOW_DT=01/01/1899 00:00:00
    $$MASTER_CODE_NOT_FOUND=
    $$ORA_HI_DATE=TO_DATE('4712-12-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    $$PROD_CAT_SET_ID10=
    $$PROD_CAT_SET_ID1=2
    $$PROD_CAT_SET_ID2=
    $$PROD_CAT_SET_ID3=
    $$PROD_CAT_SET_ID4=
    $$PROD_CAT_SET_ID5=
    $$PROD_CAT_SET_ID6=
    $$PROD_CAT_SET_ID7=
    $$PROD_CAT_SET_ID8=
    $$PROD_CAT_SET_ID9=
    $$PROD_CAT_SET_ID=2
    $$SET_OF_BOOKS_ID_LIST=1
    $$SET_OF_BOOKS_TYPE_LIST='NONE'
    $$SOURCE_CODE_NOT_SUPPLIED=
    $$TENANT_ID=DEFAULT
    $$WH_DATASOURCE_NUM_ID=999
    $DBConnection_OLAP=DWRLY_OLAP
    $DBConnection_OLTP=ORA_11_5_10
    $PMSessionLogFile=ORA_11_5_10.DWRLY_OLAP.SDE_ORA11510_Adaptor.SDE_ORA_GLSegmentDimension_Full.log
    The following snippet was discovered in the DAC logs for the failed Task (log file: SDE_ORA_GLSegmentDimension_Full_DETAIL.log):
    First error code [7004]
    First error message: [TE_7004 Transformation Parse Warning [FLEX_VALUE_SET_ID || '~' || FLEX_VALUE]; transformation continues...]
    Finally, I can confirm that there was a Task Truncate Table W_GL_LINKAGE_INFORMATION_GS that was successfully executed in Task SDE_ORA_GL_LinkageInformation_Extract.
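    As a side note, one quick way to spot parameters that were written out empty in a parameter file like the listing above is a small scan. This sketch is illustrative only; the file content here is a hypothetical excerpt, not the real SrcFiles output:

```python
import re

# Hypothetical snippet of an Informatica parameter file
param_text = """\
$$DATASOURCE_NUM_ID=4
$$LAST_EXTRACT_DATE=
$$LAST_EXTRACT_DATE_IN_SQL_FORMAT=
$$TENANT_ID=DEFAULT
"""

# A parameter line with nothing after '=' was left unpopulated
empty = [m.group(1)
         for m in re.finditer(r"^(\$\$\w+)=\s*$", param_text, re.MULTILINE)]
print(empty)
```

    Running something like this over each generated parameter file makes it easy to confirm which runtime variables the DAC failed to substitute.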
    Any further guidance greatly appreciated.
    Kind Regards,
    Gary.

  • Difference between YVBUK and XVBUK tables

    Hello,
       Can somebody explain the use of Y* and X* tables for application tables such as VBUK, LIKP etc. in user exits?
    According to note 415716, Y* tables store the state of the record currently in the database and X* tables store the changed values. But what happens if it's a new record (UPDKZ = I)? Shouldn't the Y* table be blank?
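    The convention described in note 415716 can be pictured as a before-image/after-image pair keyed by the update indicator. A hypothetical Python sketch of that logic (field names invented; this is not SAP code, just the idea) shows why, for an insert, one would expect the Y* table to stay empty:

```python
def build_images(db_record, new_record):
    """Return (y_image, x_image) following the X*/Y* table convention.

    Y* holds the record as currently on the database (the before-image);
    X* holds the changed values plus the update indicator UPDKZ.
    """
    if db_record is None:                    # new record -> UPDKZ = 'I'
        x_image = dict(new_record, UPDKZ="I")
        y_image = None                       # no database image exists yet
    else:                                    # change -> UPDKZ = 'U'
        x_image = dict(new_record, UPDKZ="U")
        y_image = dict(db_record)            # before-image from the DB
    return y_image, x_image

y, x = build_images(None, {"VBELN": "80000001"})
print(y, x["UPDKZ"])
```

    Under this model an insert carries no Y* entry at all, which matches the original poster's expectation rather than a requirement that Y* mirror X*.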
    We have the following code in the user exit
      INCLUDE YVUEPRZ1_DOC_SAVE_PREP_CARRIER                             *
    * perform update only if insert or update carrier
    * UPDKZ : Update indicator
    DATA : ls_vbpa LIKE xvbpa.
    LOOP AT xlikp.
      v_index = sy-tabix.
      CLEAR ls_vbpa.
      READ TABLE xvbpa INTO ls_vbpa
                       WITH KEY vbeln = xlikp-vbeln
                                parvw = 'SP'.
      CASE ls_vbpa-updkz.
        WHEN 'I' OR 'U'.
          MOVE-CORRESPONDING xlikp TO likp.
          MOVE-CORRESPONDING xlikp TO likpd.
    *--   Prepare changes of the delivery header
          PERFORM likp_bearbeiten_vorbereiten(sapfv50k).
          likp-yylfnr = ls_vbpa-lifnr.
    *--   Check and confirm changes
          PERFORM likp_bearbeiten(sapfv50k).
          MOVE-CORRESPONDING likp  TO xlikp.
          MOVE-CORRESPONDING likpd TO xlikp.
          MODIFY xlikp INDEX v_index.
          IF t180-trtyp = 'V'. "Modification
            xvbuk-uvk03  = 'D'.
            xvbuk-updkz  = 'U'.
            MODIFY xvbuk TRANSPORTING uvk03 updkz WHERE vbeln = xlikp-vbeln.
          ENDIF.
        WHEN 'D'.
          MOVE-CORRESPONDING xlikp TO likp.
          MOVE-CORRESPONDING xlikp TO likpd.
    *--   Prepare changes of the delivery header
          PERFORM likp_bearbeiten_vorbereiten(sapfv50k).
          CLEAR likp-yylfnr.
    *--   Check and confirm changes
          PERFORM likp_bearbeiten(sapfv50k).
          MOVE-CORRESPONDING likp  TO xlikp.
          MOVE-CORRESPONDING likpd TO xlikp.
          MODIFY xlikp INDEX v_index.
          IF t180-trtyp = 'V'. "Modification
            xvbuk-uvk03  = 'D'.
            xvbuk-updkz  = 'U'.
            MODIFY xvbuk TRANSPORTING uvk03 updkz WHERE vbeln = xlikp-vbeln.
          ENDIF.
      ENDCASE.
    ENDLOOP.
    Now the problem is that SAP is saying that the table YVBUK also has to have the same record as XVBUK but I think its absurd for UPDKZ = I since there will be no database image for that.
    thanks

    To answer Peluka's question: there are no entries in the Y* table when UPDKZ = I or UPDKZ = U,
    and I don't quite understand what Ferry Lianto is saying, but I think it's pretty much the same as above.
    Thanks for your help

  • Performance issue in BI due to direct query on BKPF and BSEG tables

    Hi,
    We had a requirement that the FI document number field should be extracted into BI.
    The following code was written; it has the correct logic, but performance is bad.
    It fetched just 100 records in more than 4-5 hrs.
    The reason is that there is a direct query written on the BSEG and BKPF tables (without a WHERE clause).
    Is there any way to improve this code, like adding the GJAHR field in the WHERE clause? I don't want to change the logic.
    Following is the code:
    WHEN '0CO_OM_CCA_9'." Data Source
        TYPES:BEGIN OF ty_bkpf,
        belnr TYPE bkpf-belnr,
        xblnr TYPE bkpf-xblnr,
        bktxt TYPE bkpf-bktxt,
        awkey TYPE bkpf-awkey,
        bukrs TYPE bkpf-bukrs,
        gjahr TYPE bkpf-gjahr,
        AWTYP TYPE bkpf-AWTYP,
        END OF ty_bkpf.
        TYPES : BEGIN OF ty_bseg1,
        lifnr TYPE bseg-lifnr,
        belnr TYPE bseg-belnr,
        bukrs TYPE bseg-bukrs,
        gjahr TYPE bseg-gjahr,
        END OF ty_bseg1.
        DATA: it_bkpf TYPE STANDARD TABLE OF ty_bkpf,
        wa_bkpf TYPE ty_bkpf,
        it_bseg1 TYPE STANDARD TABLE OF ty_bseg1,
        wa_bseg1 TYPE ty_bseg1,
        l_s_icctrcsta1 TYPE icctrcsta1.
        "Extract structure for Datasoure 0co_om_cca_9.
        DATA: l_awkey TYPE bkpf-awkey.
        DATA: l_gjahr1 TYPE gjahr.
        DATA: len TYPE i,
        l_cnt TYPE i.
        l_cnt = 10.
        tables : covp.
        data : ref_no(20).
        SELECT lifnr
        belnr
        bukrs
        gjahr
        FROM bseg
        INTO TABLE it_bseg1.
        DELETE ADJACENT DUPLICATES FROM it_bseg1 COMPARING belnr gjahr .
        SELECT belnr
        xblnr
        bktxt
        awkey
        bukrs
        gjahr
        AWTYP
        FROM bkpf
        INTO TABLE it_bkpf.
        IF sy-subrc EQ 0.
          CLEAR: l_s_icctrcsta1,
          wa_bkpf,
          l_awkey,
          wa_bseg1.
          LOOP AT c_t_data INTO l_s_icctrcsta1.
            MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
          select single AWORG AWTYP INTO CORRESPONDING FIELDS OF COVP FROM COVP
          WHERE belnr = l_s_icctrcsta1-belnr.
          if sy-subrc = 0.
              if COVP-AWORG is initial.
           concatenate l_s_icctrcsta1-refbn '%' into ref_no.
                  READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(10) =
                  l_s_icctrcsta1-refbn
                  awtyp = COVP-AWTYP
                  gjahr = l_gjahr1.
            IF sy-subrc EQ 0.
              MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
              MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
              MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
              MODIFY c_t_data FROM l_s_icctrcsta1.
              READ TABLE it_bseg1 INTO wa_bseg1
              WITH KEY
              belnr = wa_bkpf-belnr
              bukrs = wa_bkpf-bukrs
              gjahr = wa_bkpf-gjahr.
              IF sy-subrc EQ 0.
                MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
                MODIFY c_t_data FROM l_s_icctrcsta1.
                CLEAR: l_s_icctrcsta1,
                wa_bseg1,
                l_gjahr1.
              ENDIF.
            ENDIF.
                ELSE. " IF AWORG IS NOT BLANK -
                concatenate l_s_icctrcsta1-refbn COVP-AWORG into ref_no.
                READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(20) =
                ref_no
                awtyp = COVP-AWTYP
                gjahr = l_gjahr1.
            IF sy-subrc EQ 0.
              MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
              MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
              MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
              MODIFY c_t_data FROM l_s_icctrcsta1.
              READ TABLE it_bseg1 INTO wa_bseg1
              WITH KEY
              belnr = wa_bkpf-belnr
              bukrs = wa_bkpf-bukrs
              gjahr = wa_bkpf-gjahr.
              IF sy-subrc EQ 0.
                MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
                MODIFY c_t_data FROM l_s_icctrcsta1.
                CLEAR: l_s_icctrcsta1,
                wa_bseg1,
                l_gjahr1.
              ENDIF.
            ENDIF.
               endif.
          endif.
            CLEAR: l_s_icctrcsta1.
            CLEAR: COVP, REF_NO.
          ENDLOOP.
        ENDIF.

    Hello Amruta,
    I was just looking at your coding:
    LOOP AT c_t_data INTO l_s_icctrcsta1.
    MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
    select single AWORG AWTYP INTO CORRESPONDING FIELDS OF COVP FROM COVP
    WHERE belnr = l_s_icctrcsta1-belnr.
    if sy-subrc = 0.
    if COVP-AWORG is initial.
    concatenate l_s_icctrcsta1-refbn '%' into ref_no.
    READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(10) =
    l_s_icctrcsta1-refbn
    awtyp = COVP-AWTYP
    gjahr = l_gjahr1.
    Here you are interested in those BKPF records that are related to the contents of c_t_data internal table.
    I guess that this table does not contain millions of entries. Am I right?
    If yes, then the first step would be to pre-select the COVP entries:
    select belnr aworg awtyp into table lt_covp from covp
    for all entries in c_t_data
    where belnr = c_t_data-belnr.
    sort lt_covp by belnr.
    Once having this data ready, you build an internal table for BKPF selection:
    LOOP AT c_t_data INTO l_s_icctrcsta1.
      clear ls_bkpf_sel.
      ls_bkpf_sel-awkey(10) = l_s_icctrcsta1-refbn.
      read table lt_covp with key belnr = l_s_icctrcsta1-belnr binary search.
      if sy-subrc = 0.
        ls_bkpf_sel-awtyp = lt_covp-awtyp.
      endif.
      ls_bkpf_sel-gjahr = l_s_icctrcsta1-fiscper(4).
      insert ls_bkpf_sel into table lt_bkpf_sel.
    ENDLOOP.
    Now you have all necessary info to read BKPF:
    SELECT
    belnr
    xblnr
    bktxt
    awkey
    bukrs
    gjahr
    AWTYP
    FROM bkpf
    INTO TABLE it_bkpf
    for all entries in lt_bkpf_sel
    WHERE
      awkey = lt_bkpf_sel-awkey and
      awtyp = lt_bkpf_sel-awtyp and
      gjahr = lt_bkpf_sel-gjahr.
    Then you can access BSEG with the bukrs, belnr and gjahr from the selected BKPF entries. This will be fast.
    Moreover I would even try to make a join on DB level. But first try this solution.
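    The pattern Yuri describes — collect only the keys you actually need, fetch the matching rows in one batched round trip, then join in memory — is a general one. A hedged Python sketch of the idea (`fetch_rows` stands in for the FOR ALL ENTRIES select; everything here is illustrative):

```python
def batched_lookup(driver_rows, fetch_rows):
    """Fetch only the rows whose keys occur in driver_rows.

    fetch_rows(keys) is assumed to return {key: row} for the requested
    keys only -- the equivalent of SELECT ... FOR ALL ENTRIES.
    """
    keys = {r["belnr"] for r in driver_rows}      # distinct keys first
    lookup = fetch_rows(keys)                     # one batched round trip
    return [(r, lookup.get(r["belnr"])) for r in driver_rows]

# Usage with a stubbed "table": only the 2 needed rows are transferred
table = {b: {"belnr": b, "awtyp": "MKPF"} for b in ("1", "2", "3", "4")}
pairs = batched_lookup([{"belnr": "2"}, {"belnr": "4"}],
                       lambda ks: {k: table[k] for k in ks if k in table})
```

    Compared with the original full-table SELECT on BSEG/BKPF, the data transferred is bounded by the size of the driver package, not the size of the database table.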
    Regards,
      Yuri

  • BAPI FOR Deleting a Schedule Line Item from EKES and EKET tables

    Dear All,
    I would like to delete one of the line items from EKES (PO Confirmation) and the respective line item from EKET (the PO Schedule Line table).
    Assume that I am allowing the user to select the line item from a Z-program screen and collecting the PO and its line item details in an internal table.
    Can I use
    <b>BAPI_PO_CHANGE</b>? If so, can anybody tell me the steps to follow to use this BAPI for deleting the PO line items, since I am going to try a BAPI for the first time.
    Please help me out
    Message was edited by: Raja K.P

    Hi raja ,
    loop at iekko1.
        w_index = sy-tabix.
        item-po_item   = itemx-po_item   = iekko1-ebelp.
        item-quantity  = iekko1-mng01.
        itemx-quantity = iekko1-mng01.
        if iekko1-wamng = iekko1-wemng.
        itemx-no_more_gr = item-no_more_gr = 'X'.
        else.
        itemx-no_more_gr = item-no_more_gr = ''.
        endif.
        append item.
        append itemx.
          clear return[].
          call function 'BAPI_PO_CHANGE'
               exporting
                    purchaseorder = iekko1-ebeln
               tables
                    return        = return
                    poitem        = item
                    poitemx       = itemx.
          if return[] is initial.
            commit work and wait.
            call function 'DEQUEUE_ALL'.
          endif.
    endloop.
    Search for the deletion fields, which you have to mark with 'X'.
    Before calling this BAPI you have to lock the PO by using ENQUEUE.
    <b>
    FU BAPI_PO_CHANGE
    Text
    Change purchase order
    Functionality
    Function module BAPI_PO_CHANGE enables you to change purchase orders. The Change method uses the technology behind the online transaction ME22N.
    Alternatively, the IDoc type PORDCH1 is available. The data from this IDoc populates the interface parameters of the function module BAPI_PO_CHANGE.
    Functionality in Detail
    Authorization
    When you create (activity 02) an Enjoy purchase order, the following authorization objects are checked:
    M_BEST_BSA (document type in PO)
    M_BEST_EKG (purchasing group in PO)
    M_BEST_EKO (purchasing organization in PO)
    M_BEST_WRK (plant in PO)
    Controlling adoption of field values via X bar
    For most tables, you can use your own parameters in the associated X bar (e.g. PoItemX) to determine whether fields are to be set initial, values inserted via the interface, or default values adopted from Customizing or master records, etc. (for example, it is not mandatory to adopt the material group from an underlying requisition - you can change it with the BAPI).
    Transfer
    Purchase order number
    The PurchaseOrder field uniquely identifies a purchase order. This field must be populated in order to carry out the Change method.
    Header data
    The header data of the Enjoy purchase order is transferred in table PoHeader.
    Item data
    The item data of the Enjoy purchase order is stored in the tables PoItem (general item data). Changes regarding quantity and delivery date are to be made in the table PoSchedule.
    Use the table PoAccount to change the account assignment information.
    Services and limits
    Changes to existing items cannot be carried out with the Change method. It is only possible to create new items.
    Conditions
    Conditions are transferred in the table PoCond; header conditions in the table PoCondHeader. A new price determination process can be initiated via the parameter CALCTYPE in the table PoItem.
    Vendor and delivery address
    The vendor address in the table PoAddrVendor and the delivery address in the table PoAddrDelivery can only be replaced by another address number that already exists in the system (table ADRC). Changes to address details can only be made using the method BAPI_ADDRESSORG_CHANGE.
    Partner roles
    You can change all partners except the partner role "vendor" via the table PoPartner.
    Export/import data
    Export/import data can be specified per item in the table PoExpImpItem. Foreign trade data can only be transferred as default data for new items. Changes to the export/import data of existing items are not possible.
    Texts
    Header and item texts can be transferred in the tables PoTextHeader and PoTextItem. Texts for services are imported in the table PoServicesText. Texts can only be replaced completely.
    Version Management
    You can make use of the Version Management facility via the table AllVersions.
    Return
    If the PO was changed successfully, the header and item tables are populated with the information from the PO.
    Return messages
    Messages are returned in the parameter Return. This also contains information as to whether interface data has been wrongly or probably wrongly (heuristical interface check) populated. If a PO has been successfully created, the PO number is also placed in the return table with the appropriate message.
    Restrictions
    With this function module, it is not possible to:
    Create subcontracting components (you can only use existing ones)
    Create configurations (you can only use existing ones)
    Change message records (table NAST) and additional message data (this data can only be determined via the message determination facility (Customizing))
    Attach documents to the purchase order
    Change foreign trade data
    Change service data
    Change or reexplode BOMs
    A firewall prevents the manipulation of data that is not changeable in Purchasing according to the business logic of the purchase order (e.g. PO number, vendor, etc.).
    PO items with an invoicing plan cannot be created or changed using the BAPIs
    In this connection, please refer to current information in Note 197958.
    To change addresses with numbers from Business Address Services (central address management), please use the function module BAPI_ADDRESSORG_CHANGE.
    To change variant configurations, please use the function module BAPI_UI_CHANGE. More information is available in the BAPI Explorer under the Logistics General node.
    In the case of changes that are to be made via the BAPI_PO_CHANGE, a firewall first checks whether the relevant fields are changeable. This approach follows that of the online transaction. Here it is not possible to change the vendor or the document type, for example.
    Example
    Example of changes made to a purchase order with:
    1. Change in header data
    2. Change in item
    3. Change in delivery schedule
    4. Change in account assignment
    5. Change in conditions
    6. Change in partners
    Parameter: PURCHASEORDER 4500049596
    Parameter: POHEADER
    PMNTTRMS = 0002
    PUR_GROUP = 002
    Parameter: POHEADERX
    PMNTTRMS = X
    PUR_GROUP = X
    Parameter: POITEM
    PO_ITEM = 00001
    CONF_CTRL = 0001
    Parameter: POITEMX
    PO_ITEM = 00001
    PO_ITEMX = X
    CONF_CTRL =  X
    Parameter: POSCHEDULE
    PO_ITEM = 00001
    SCHED_LINE = 0001
    QUANTITY = 10.000
    PO_ITEM = 00001
    SCHED_LINE = 0003
    DELETE_IND =  X
    Parameter: POSCHEDULEX
    PO_ITEM =  00001
    SCHED_LINE =  0001
    PO_ITEMX =  X
    SCHED_LINEX =  X
    QUANTITY =  X
    PO_ITEM =  00001
    SCHED_LINE =  0003
    PO_ITEMX =  X
    SCHED_LINEX =  X
    DELETE_IND = X
    Parameter: POACCOUNT
    PO_ITEM = 00001
    SERIAL_NO = 01
    GL_ACCOUNT = 0000400020
    Parameter: POACCOUNTX
    PO_ITEM = 00001
    SERIAL_NO = 01
    PO_ITEMX = X
    SERIAL_NOX = X
    GL_ACCOUNT = X
    Parameter: POCOND
    ITM_NUMBER = 000001
    COND_TYPE = RA02
    COND_VALUE = 2.110000000
    CURRENCY = %
    CHANGE_ID = U
    Parameter: POCONDX
    ITM_NUMBER = 000001
    COND_ST_NO = 001
    ITM_NUMBERX = X
    COND_ST_NOX = X
    COND_TYPE = X
    COND_VALUE = X
    CURRENCY = X
    CHANGE_ID = X
    Parameter: POPARTNER
    PARTNERDESC =  GS
    LANGU =  EN
    BUSPARTNO = 0000001000
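    The example's pairing of each data table with an X ("change bar") table amounts to one value row plus one flag row per change. A hypothetical sketch of how the schedule-line deletion above could be assembled (the helper and dict layout are invented for illustration; in ABAP these would be the POSCHEDULE/POSCHEDULEX table rows):

```python
def schedule_delete_rows(po_item, sched_line):
    """Build the data row and its X (flag) row that mark one
    schedule line for deletion, as in the POSCHEDULE example."""
    data = {"PO_ITEM": po_item, "SCHED_LINE": sched_line,
            "DELETE_IND": "X"}
    flags = {"PO_ITEM": po_item, "SCHED_LINE": sched_line,
             "PO_ITEMX": "X", "SCHED_LINEX": "X", "DELETE_IND": "X"}
    return data, flags

# Mark schedule line 0003 of item 00001 for deletion
po_schedule, po_schedulex = schedule_delete_rows("00001", "0003")
```

    The flag row tells the BAPI which fields of the data row to honor; fields not flagged with 'X' keep their current values.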
    Help in the Case of Problems
    1. Note 197958 lists answers to frequently asked questions (FAQs). (Note 499626 contains answers to FAQs relating to External Services Management.)
    2. If you have detected an error in the function of a BAPI, kindly create a reproducible example in the test data directory in the Function Builder (transaction code SE37). Note 375886 tells you how to do this.
    3. If the problem persists, please create a Customer Problem Message for the component MM-PUR-PO-BAPI, and document the reproducible example where necessary.
    Customer Enhancements
    The following user exits (function modules) are available for the BAPI BAPI_PO_CREATE1:
    EXIT_SAPL2012_001 (at start of BAPI)
    EXIT_SAPL2012_003 (at end of BAPI)
    The following user exits (function modules) are available for the BAPI BAPI_PO_CHANGE:
    EXIT_SAPL2012_002 (at start of BAPI)
    EXIT_SAPL2012_004 (at end of BAPI)
    These exits belong to the enhancement SAPL2012 (see also transaction codes SMOD and CMOD).
    There is also the option of populating customer-specific fields for header, item, or account assignment data via the parameter EXTENSIONIN.
    Further Information
    1. Note 197958 contains up-to-date information on the purchase order BAPIs.
    2. If you test the BAPIs BAPI_PO_CREATE1 or BAPI_PO_CHANGE in the Function Builder (transaction code SE37), no database updates will be carried out. If you need this function, please take a look at Note 420646.
    3. The BAPI BAPI_PO_GETDETAIL serves to read the details of a purchase order. The BAPI cannot read all details (e.g. conditions). However, you can use the BAPI BAPI_PO_CHANGE for this purpose if only the document number is populated and the initiator has change authorizations for purchase orders.
    4. Frequently used BAPIs for purchase orders are BAPI_PO_CREATE, BAPI_PO_CREATE1, BAPI_PO_CHANGE, BAPI_PO_GETDETAIL, BAPI_PO_GETITEMS, BAPI_PO_GETITEMSREL, and BAPI_PO_GETRELINFO.
    5. For more information on purchase orders, refer to the SAP library (under MM Purchasing -> Purchase Orders) or the Help for the Enjoy Purchase Order, or choose the path Tools -> ABAP Workbench -> Overview -> BAPI Explorer from the SAP menu.
    Parameters
    PURCHASEORDER
    POHEADER
    POHEADERX
    POADDRVENDOR
    TESTRUN
    MEMORY_UNCOMPLETE
    MEMORY_COMPLETE
    POEXPIMPHEADER
    POEXPIMPHEADERX
    VERSIONS
    NO_MESSAGING
    NO_MESSAGE_REQ
    NO_AUTHORITY
    NO_PRICE_FROM_PO
    EXPHEADER
    EXPPOEXPIMPHEADER
    RETURN
    POITEM
    POITEMX
    POADDRDELIVERY
    POSCHEDULE
    POSCHEDULEX
    POACCOUNT
    POACCOUNTPROFITSEGMENT
    POACCOUNTX
    POCONDHEADER
    POCONDHEADERX
    POCOND
    POCONDX
    POLIMITS
    POCONTRACTLIMITS
    POSERVICES
    POSRVACCESSVALUES
    POSERVICESTEXT
    EXTENSIONIN
    EXTENSIONOUT
    POEXPIMPITEM
    POEXPIMPITEMX
    POTEXTHEADER
    POTEXTITEM
    ALLVERSIONS
    POPARTNER
    Exceptions
    Function Group
    2012
    </b>
    regards
    prabhu
    Message was edited by: Prabhu Peram

  • SAP QUERY LOOPS AND INTERNAL TABLE

    Hi All, I have a query which I have made. It runs from table EKPO, which holds the PO details, and what I now want to do, via ABAP code, is pull the total of goods receipts for each PO and line item into a new field. Sounds easy enough. The problem:
    The table which contains the GR data is EKBE, which for a given PO and line item can have many 101 movements and 102 movements. So what I want is an ABAP statement that sums up the totals of the 101 movements for the PO and line item, subtracts the totals of the 102 movements, and posts the result into the new field I have created.
    I am pretty decent with ABAP code in queries, i.e. SELECT statements etc., but from what I can see I need to create an internal table and use a loop with a COLLECT statement, and I keep failing due to not enough knowledge. Please can someone help me with this and provide the code with an explanation, as I would like to understand it.
    POINTS WILL BE REWARDED
    Thanks
    Kind Regards
    Adeel Sarwar

    Hi,
    This is the full code I have entered, but it's not working. Any help would be appreciated. If you could rectify the code and internal tables, that would be great.
    Thanks
    TABLES: EKBE.
    DATA: PurO LIKE EKPO-EBELN,
          POLI LIKE EKPO-EBELP.
    *New table and vars defined
    DATA: IT_EKBE LIKE EKBE OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF IT_SUM OCCURS 0,
            EBELN TYPE EBELN,
            EBELP TYPE EBELP,
            DMBTR TYPE DMBTR,
            MENGE TYPE MENGE_D,
          END OF IT_SUM.
    DATA: QTYD TYPE MENGE_D.
    CLEAR: QTYD.
    MOVE: EKPO-EBELN TO PurO,
          EKPO-EBELP TO POLI.
    *Read all GR history records (BEWTP 'E') for this PO item in one array fetch
    SELECT * FROM EKBE INTO TABLE IT_EKBE
        WHERE EBELN = PurO
        AND   EBELP = POLI
        AND   BEWTP = 'E'.
    LOOP AT IT_EKBE.
      MOVE-CORRESPONDING IT_EKBE TO IT_SUM.
    * reversals (movement type 102) count negatively
      IF IT_EKBE-BWART = '102'.
        IT_SUM-DMBTR = IT_SUM-DMBTR * -1.
        IT_SUM-MENGE = IT_SUM-MENGE * -1.
      ENDIF.
      COLLECT IT_SUM.
      CLEAR IT_SUM.
    ENDLOOP.
    READ TABLE IT_SUM WITH KEY EBELN = PurO
                               EBELP = POLI.
    IF SY-SUBRC = 0.
      QTYD = IT_SUM-MENGE.
    ELSE.
      QTYD = 0.
    ENDIF.

  • Sections and Automatic Table of Contents

    Hello there,
    I have encountered a problem with InDesign while trying to make an automatic TOC for a journal I am working on. It has something to do with Sections.
    So I have a book with several indd files. The very first indd file is called "frontpages" which includes the cover page, the inside cover, the leaf page, then two facing pages for the table of contents, then followed by the last page (Letter from the Editor).
    This is followed by my second indd file called "newsbeats" which has 5 pages.
    Now, what I wanted to do was mark the frontpages' page numbers using lower case roman numerals (i, ii, iii, etc.). Of course I do not want to include the cover page and the inside cover, so I want to start (i) with the leaf page, (ii) and (iii) mark the table of contents pages, and finally (iv) the Editor's Letter. I did this by starting a section at the leaf page, START PAGE NUMBERING AT: 1, and selecting the (i, ii, iii...) option. It worked, while leaving both the cover page and the inside cover with markers (a) and (b), which I previously set (because I do not know how to just get rid of the page number...)
    Now, my book pages actually starts page (1) in the first page of the "newsbeats" document. So to do that, I made another section on the newsbeats first page, START PAGE NUMBERING AT: 1, and selected (1, 2, 3...). That works, too.
    So at the moment, I have 3 sections to my page numbers: 1) the lower case letters (a, b) section for the cover page and inside cover, respectively; 2) the roman numerals section (i, ii, iii...) for the leaf page, table of contents, and Editor's Letter; and lastly, 3) the arabic numerals (1, 2, 3...) starting from the newsbeats document all the way to the last page of the last indd of my book.
    Hope it is clear enough. Now here comes the problem:
    When I try to create an automatic Table of Contents, it correctly finds my chapters (I used the style for each journal article's title) and lists them. The problem is, the page numbers do not appear correctly. Instead, it shows "a" on EACH AND EVERY table of contents ENTRY. I tried some trial and error, and found out that the pages shown (a bunch of "a") are actually the COVER PAGE'S (a) page number....
    I tried for hours searching on topics like "Sections and Automatic TOC" or "How to selectively choose which pages to include in a TOC".. But I found none.
    Your help would be really appreciated. I can easily manually type the page numbers (my journal only has 9 articles), but I wanted to do this in a very systematic way, so it would be a lot easier if in any case, there are more than 9 articles for next year's publication.
    THANK YOU SO MUCH!
    - Larry
    PS: I attached a file showing the generated table of contents (on the ii-iii page that I wanted it to appear on), with the frontpages page panel, which shows the page numbering as well. Also I included the Book Panel, so you see how the pages are set up.

    We definitely experienced the same issue. Seems like pages just get lost. This is the approach I took:
    On the groups main page we just added a link called 'View All Articles' wrapped in a h1 tag (to make it very big and visible to the user) using the following url:
    search/?q=%20
    The resulting page will search for every article with a space in it and return the results. Still not the most elegant solution but it works. I'd like to implement it right into the xsl file but haven't found very much documentation to aid in this.

  • Export - Import In ABAP ( for variables and internal table)

    how can we pass values for a variable and an internal table using EXPORT and IMPORT?
    Program 1:
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    Program 2:
    data: var type sy-uzeit.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    i found var with value 0 while importing.
    what is the right syntax for passing the value of a variable and of an internal table?
    regards,
    dushyant.

    Hi,
    There are two possible solutions.
    Solution1:
    Program 1 should be run at least once beforehand, so that memory ID 'TIME' gets filled.
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    Program 2 will only produce the result if 'TIME' has been filled.
    data: var type sy-uzeit.
    clear var.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    Solution2:
    Single program:
    data: var type sy-uzeit.
    var = sy-uzeit.
    EXPORT var TO MEMORY ID 'TIME'.
    clear var.
    IMPORT var FROM MEMORY ID 'TIME'.
    write:/ var, sy-subrc, sy-uzeit.
    Kindly reward points by clicking the star on the left of the reply, if it helps.
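    The same mechanism also works for the internal table part of the question; a minimal sketch (the table structure and the memory ID 'MY_ITAB' are only examples, not from the original post):
    DATA: BEGIN OF itab OCCURS 0,
            matnr TYPE matnr,
            menge TYPE menge_d,
          END OF itab.
    * fill itab, then export the whole table under one memory ID
    EXPORT itab TO MEMORY ID 'MY_ITAB'.
    * in the importing program, declare an identically structured table
    REFRESH itab.
    IMPORT itab FROM MEMORY ID 'MY_ITAB'.
    IF sy-subrc <> 0.
    * the ID was not found in ABAP memory - handle the error
    ENDIF.
    Note that ABAP memory is only shared within one user session (for example between a program and one it calls via SUBMIT ... AND RETURN); it does not survive a new logon.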

  • Profit center data in GLPCA and GLPCT Tables

    Hi ,
    We are in ECC 6 but with classic GL and profit center accounting.
    We have documents posted with the profit center as the assignment object. When I check the documents through document display I can see the profit center in them, but when I execute transactions 5KEZ and 2KEE I cannot fetch any data. There is actually no data updated to the GLPCA and GLPCT tables.
    These accounts are freight clearing, benefits clearing, etc.
    My actual requirement is the ending balances of the GL accounts with profit center subtotals, i.e. a GL trial balance by profit center.
    Thank you.

    Hi
    If these are balance sheet accounts, you need to run the period-end transactions (1KE*) to transfer the balances to EC-PCA.
    Also tick "Online Transfer" in the EC-PCA configuration and specify these accounts in 3KEH, then post a transaction; it should be transferred to EC-PCA online, on a real-time basis.
    Br, Ajay M
