Process a single trail file

Hi All,
I want to process a single trail file using a new Replicat process. How do I do that? Are there any extra options I have to use in the ADD REPLICAT command?

You could copy just that trail to a separate folder and then fully qualify that path in the ADD REPLICAT command. Replicat would then process
just that trail, since there would be no further trails in that directory/folder:
DBLOGIN USERID <id> PASSWORD <pw>
ADD REPLICAT <name>, EXTTRAIL <complete path of the trail including the prefix>, EXTSEQNO <trail sequence number>, EXTRBA 0
Or, if not using a checkpoint table:
ADD REPLICAT <name>, EXTTRAIL <complete path of the trail including the prefix>, EXTSEQNO <trail sequence number>, EXTRBA 0, NODBCHECKPOINT
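For example, suppose the trail file to process is /ggs/dirdat/aa000042 (the paths, names, and login below are hypothetical). From GGSCI, copy it into its own directory and register a Replicat starting at that sequence number (NODBCHECKPOINT shown; omit it if you use a checkpoint table):
SHELL mkdir /ggs/dirdat_single
SHELL cp /ggs/dirdat/aa000042 /ggs/dirdat_single/aa000042
DBLOGIN USERID gguser PASSWORD ggpw
ADD REPLICAT rsingle, EXTTRAIL /ggs/dirdat_single/aa, EXTSEQNO 42, EXTRBA 0, NODBCHECKPOINT
START REPLICAT rsingle
Since /ggs/dirdat_single contains only that one file, the Replicat processes it and then simply waits at the end of the trail; no further files will appear in that directory.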
Thanks,
Raj

Similar Messages

  • How to estimate the time required by one Replicat to process a 10 MB trail file

    Hi
    Consider one trail file of size 10 MB that contains only insert statements, where the size of one record is 2 KB. Under an ideal configuration, how much time would the Replicat take to read the trail file and apply the transactions on the target server?
    Is there any formula to estimate the time required by a Replicat process for a trail file?
    Thanks

    There are so many variables I wouldn't venture to guess.
    I have implemented GG on multiple systems, each had their own performance characteristics depending on operating system, oracle version, makeup of the data being replicated, network latency, load on the source system, load on the target system, etc.
    My best suggestion is to set it up and try it out.
    In my experience, once set up, I have had to do very little GG-specific tuning to get very good speed (sometimes tweaking BATCHSQL helps for some usage patterns). Many SLAs I have encountered are 15 minutes... I get spoiled sometimes when I report we are getting committed transactions captured, transported, and applied within just a few seconds. And then they always want their data in 10 seconds :)

  • SQLEXEC on each new trail file or just on replicat start to capture current FILESEQNO?

    Dear community,
    I need to capture the fileseqno of the trail file that the Replicat is processing. But I only need to capture it once, without the overhead of adding a fileseqno column to the target tables.
    I clearly understand that using colmap ( usedefaults, FILESEQNO = @GETENV ("RECORD", "FILESEQNO") ) inserts the fileseqno value for every record captured from the logs, but what I'm trying to accomplish is to react to the event of the fileseqno changing while the Replicat is running.
    Following the application logic, I actually only need to capture the fileseqno of the first trail that is processed by the Replicat.
    What I've tried so far is using following parameters for replicat (given two tables to be replicated):
    table m.EventHead,
    sqlexec( id set_first_seq_no1, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@coltest( set_first_seq_no1.a_fileseqno, NULL ) or @coltest( set_first_seq_no1.a_fileseqno, MISSING ) or @coltest( set_first_seq_no1.a_fileseqno, INVALID ) ),
    eventactions( ignore record );
    table m.EventTail,
    sqlexec( id set_first_seq_no2, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@coltest( set_first_seq_no2.a_fileseqno, NULL ) or @coltest( set_first_seq_no2.a_fileseqno, MISSING ) or @coltest( set_first_seq_no2.a_fileseqno, INVALID ) ),
    eventactions( ignore record );
    pk_replication_accounting.set_first_seq_no is defined within package as
    procedure set_first_seq_no( a_fileseqno in out pls_integer );
    With the filter clause I tried to instruct GG to perform the SQLEXEC only for the first record captured for every table, but with no success. The stored procedure is fired multiple times, upon every record in the trail file.
    As far as I understand, a standalone SQLEXEC is not capable of obtaining the value of @GETENV( "GGFILEHEADER", "FILESEQNO" ), though I have not tried it yet.
    Another way I can see is to instruct the Extract to add one fake record for every new trail file and then to process it within a MAP clause with SQLEXEC. For example, if SOURCEISTABLE had a per-table effect, then we could get our single record for every trail using the dual table:
    sourceistable
    table dual;
    Still, I don't know how to achieve the required behavior.
    Please help if you know of any workarounds.

    I managed to capture the current fileseqno on every Replicat start with the following parameters:
    ignoreupdatebefores
    map m.EventHead, target gg.tmp_gg_dummy, handlecollisions, colmap ( id = @GETENV ("RECORD", "FILERBA") ),
    sqlexec( id set_first_seq_no1, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@getenv( "STATS", "TABLE", "GG.TMP_GG_DUMMY", "DML" ) = 0), insertallrecords;
    map m.EventTail, target gg.tmp_gg_dummy, handlecollisions, colmap ( id = @GETENV ("RECORD", "FILERBA") ),
    sqlexec( id set_first_seq_no2, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@getenv( "STATS", "TABLE", "GG.TMP_GG_DUMMY", "DML" ) = 0), insertallrecords;
    tmp_gg_dummy is defined as the following:
    create global temporary table gg.tmp_gg_dummy ( id number( 14, 0 ) ) on commit delete rows;
    alter table gg.tmp_gg_dummy add constraint tmp_gg_dummy_pk primary key ( id );
    The procedure is fired only once per Replicat start, and the report file shows the following:
    From Table m.EventHead to GG.TMP_GG_DUMMY:
      Stored procedure set_first_seq_no1:
             attempts:         0
           successful:         0
    From Table m.EventTail to GG.TMP_GG_DUMMY:
            inserts:         0
            updates:         1
            deletes:         0
           discards:         0
      Stored procedure set_first_seq_no2:
             attempts:         1
           successful:         1
    though the original mapping from m.EventTail shows:
            inserts:        69
            updates:        21
            befores:        21
            deletes:         0
           discards:         0
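    For reference, here is a minimal PL/SQL sketch of what pk_replication_accounting.set_first_seq_no could look like. Only the procedure signature comes from this thread; the package-level state and body are assumptions:
    create or replace package pk_replication_accounting as
      procedure set_first_seq_no( a_fileseqno in out pls_integer );
    end pk_replication_accounting;
    /
    create or replace package body pk_replication_accounting as
      -- assumed package-level state; lives for the Replicat's database session
      g_first_seq_no pls_integer := null;
      procedure set_first_seq_no( a_fileseqno in out pls_integer ) is
      begin
        -- remember only the fileseqno of the first trail processed
        if g_first_seq_no is null then
          g_first_seq_no := a_fileseqno;
          -- persist it here if other sessions need it, e.g. insert into a state table
        end if;
      end set_first_seq_no;
    end pk_replication_accounting;
    /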

  • When installing Server 2008 Datacenter from disc, I get the error message "Windows could not parse or process the unattend answer file for pass [specialize]"

    This is a clean install from a disc onto a 3TB GPT drive. This is for testing; I'm not concerned with finding a more practical solution for this installation.
    There is no image or answer file involved. But at the "Completing Installation" phase, I receive this error message about an answer file.
    Has anyone else experienced this problem? The only results I find here are regarding actual imaging with answer files.
    EDIT: [Redacted]
    EDIT 2: Since this now happens with every single installation of Server 2008 on this system, I feel the need to keep this open and elaborate on my situation.
    I originally installed Server 2008 on the 2TB partition of this 3TB drive. The partition type was MBR. I reinstalled it (there was a false alarm for malicious software) since it was a fresh install without updates or anything else installed. And for the
    second time, it installed fine.
    I decided to make the most of my 3TB drive and try GPT partitioning. So I went into the command prompt > diskpart > ran "clean" on the disk, then "convert GPT"
    That is when I tried another install of Server 2008 and received this error message about an answer file.
    I decided to convert it back to MBR and get on with what I wanted to test in the first place. So I went into Diskpart again, ran "clean", ran "convert MBR" and started the install. But this resulted in the same error message about an answer file.
    I went back to Diskpart, "clean"ed the disk, ran "convert dynamic" to make sure it was dynamic, then tried the install again, with the same result.
    Now, I've tried installing on the disk as GPT, MBR basic and MBR dynamic. I've tried a different installation disc, as well. I got the same result. At this point, I'm going to switch hard drives, but I'm still open to input. Thanks for reading!

    Hi,
    Since you have swapped the hard disk, does the same issue occur again if you redo the same steps?
    I asked "OEM or retail" because of this known issue:
    "Windows could not parse or process the unattend answer file for pass [specialize]" error message when you perform an in-place upgrade in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2425962
    An answer file may already be contained on the disc, which causes the issue.
    If the issue still exists, you can press Shift+F10 during the installation process to see the log, and check it for detailed information about the installation error.

  • How can I spawn parallel HBRs from one single batch file?

    We have several Hyperion business rules to be executed in the night batch. To ease control and management, we put all the business rules in one single batch file. However, how can we spawn the different business rules so that they can be executed in parallel?
    We are using Windows Server 2003, Scheduled Tasks with Essbase ver 11.1
    Thanks!

    CL wrote:
    R1 and R2 can run in parallel but must finish before R3 and R4?
    Can you START /WAIT a batch that then spawns two more START sessions without a /WAIT?
    Do that twice and you will get rough parallelism.
    Failing that, why not just use an OS scheduler? Have the first processes write completion logs. Time the second so that the first should be done, but put in a loop that looks for FILEEXIST.
    Lots of ways to approach this. Scripting this stuff is always fun.
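    A rough batch sketch of the START /WAIT idea (run_rule.cmd and the .done flag names are hypothetical placeholders for however you launch your HBRs; ping serves as a sleep since Server 2003 has no timeout command):
    @echo off
    rem Phase 1: spawn R1 and R2 in parallel; each drops a flag file when done
    del /q r1.done r2.done 2>nul
    start "R1" cmd /c "call run_rule.cmd R1 & echo done > r1.done"
    start "R2" cmd /c "call run_rule.cmd R2 & echo done > r2.done"
    :wait
    if not exist r1.done goto sleep
    if not exist r2.done goto sleep
    goto phase2
    :sleep
    rem 11 pings to localhost is roughly a 10-second pause
    ping -n 11 127.0.0.1 > nul
    goto wait
    :phase2
    rem R1 and R2 have finished; now run R3 and R4 in parallel
    start "R3" cmd /c run_rule.cmd R3
    start "R4" cmd /c run_rule.cmd R4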
    Regards,
    Cameron Lackpour
    Thanks! Yes, there are lots of ways to handle this with batch scripts. The Windows default Scheduled Tasks does not have a dependency-checking function, so you need to write scripts to check something, e.g. the log file.

  • Importing multiple amount columns from a single text file

    I'm sure this question has been addressed many times. I have tried to search for an answer here and other areas, but I have not been able to find a clear answer yet. I am relatively new to HFM and FDM and thus do not have a lot of experience to fall back on. I am primarily a Planning/Essbase person. That being said, here is my question:
    I have a data source (text file) containing two amount columns that I need to load to HFM via FDM. One amount column consists of Average Exchange Rates and the other amount column consists of Ending Exchange Rates. I have been asked to develop a process to load both columns of data to HFM using a single process (one Import Format). I've been told this is possible by writing an Import DataPump script. It seems that I would need to create a temporary record set based on the original source file and modify it so that it contained a duplicate set of records where the first set would be used for the Average Rate and the second set would be used for the Ending Rate. This would be a piece of cake using SQL against a relational source, but that's obviously not the case here. I do have some experience with writing FDM scripts but from an IF... Then... Else... standpoint based on metadata values.
    If there is anyone out there that has time to help me with this, it would be most appreciated.
    Thanks,

    This is relatively easy to achieve with a single import script associated with the Account source field (assuming AverageRate and EndRate are accounts in your application) in your import format.
    Essentially your first amount, say AverageRate, would be set as the default field for Amount, and these values would be loaded as if it were a single-value file. For the second value, EndRate, you would have to insert the second value directly into the FDM work table, which is the temporary table populated when data is imported from a file during the import process. The example code snippet below should give you guidance on how this is done:
    'Get name of temp import work table
    strWorkTableName = RES.PstrWorkTable
    'Create temp table trial balance recordset
    Set rsAppend = DW.DataAccess.farsTable(strWorkTableName)
    If IsNumeric(EndRateFieldValue Ref Goes Here) Then
              If EndRateFieldValue Ref Goes Here <> 0 Then
                   ' Create a new record, and supply it with its field values
                   rsAppend.AddNew
                   rsAppend.Fields("DataView") = "YTD"
                   rsAppend.Fields("PartitionKey") = RES.PlngLocKey
                   rsAppend.Fields("CatKey") = RES.PlngCatKey
                   rsAppend.Fields("PeriodKey") = RES.PdtePerKey
                   rsAppend.Fields("CalcAcctType") = 9
                   rsAppend.Fields("Account") = "EndRate"
                   rsAppend.Fields("Amount") = EndRateFieldValue Ref
                   rsAppend.Fields("Entity")=DW.Utilities.fParseString(strRecord, 16, 1, ",")
                   rsAppend.Fields("UD1") = DW.Utilities.fParseString(strRecord, 16, 2, ",")
                   rsAppend.Fields("UD2") = DW.Utilities.fParseString(strRecord, 16, 3, ",")
                   rsAppend.Fields("UD3") = DW.Utilities.fParseString(strRecord, 16, 16, ",")
                   'Append the record to the collection
                   rsAppend.Update
              End If
    End If
    'Close recordset
    rsAppend.close
    In addition, the return value of this import script should be "AverageRate", i.e. the name of the account associated with the first value field. The NZP expression also needs to be put on the Amount field in the import format to ensure that the EndRate field value is always processed even if the value of AverageRate is zero.

  • Can I use a select and update statement in a single jsp file?

    I want to update the BUY table every time I add a SELL transaction... I want to subtract the stocks that I sold from those that I've bought before.
    Note: I used separate tables for the BUY and SELL transactions.
    After I have added a transaction, I want to update the BUY table. This is my problem: can I use both a SELECT and an UPDATE statement at the same time in a single JSP file? For example, like this:
    select * from test, test1;
    update test
    set total_shares = total_shares - Stotal
    where stock_code = Scode and name_broker = Sbroker;
    Can I have both of these statements in the same JSP file in order to update the BUY table?
    Or can anyone suggest how I can process that update? Thanks!

    Can I have both of these statements in the same JSP file in order to update the BUY table?
    Yes. But wouldn't it have been easier just to try it?
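    For illustration, a minimal JDBC sketch of the SELECT followed by the UPDATE in one transaction (the connection string, credentials, and variable values are hypothetical; in a real web application this belongs in a servlet or DAO class rather than in JSP scriptlets):
    import java.sql.*;

    public class UpdateBuyTable {
        public static void main(String[] args) throws SQLException {
            String scode = "ABC";       // hypothetical stock code
            String sbroker = "BrokerX"; // hypothetical broker name
            int stotal = 100;           // hypothetical shares sold
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "user", "pw")) {
                con.setAutoCommit(false);
                // first the SELECT...
                try (PreparedStatement sel = con.prepareStatement(
                        "select total_shares from test where stock_code = ? and name_broker = ?")) {
                    sel.setString(1, scode);
                    sel.setString(2, sbroker);
                    try (ResultSet rs = sel.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("current shares: " + rs.getInt(1));
                        }
                    }
                }
                // ...then the UPDATE, committed together with it
                try (PreparedStatement upd = con.prepareStatement(
                        "update test set total_shares = total_shares - ? "
                        + "where stock_code = ? and name_broker = ?")) {
                    upd.setInt(1, stotal);
                    upd.setString(2, scode);
                    upd.setString(3, sbroker);
                    upd.executeUpdate();
                }
                con.commit();
            }
        }
    }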

  • How to loop through single XML File and send multiple RFC calls?

    I am looking for the best approach to use for making multiple RFC calls (can be sequential) using a single XML file of data.  I have been attempting to get a BPM loop working, but to no avail.  My RFC only accepts a single set of delivery input and I have been told to try to work with it as is.
    input xml sample:
    <?xml version="1.0" encoding="UTF-8"?>
    <ProofOfDelivery>
       <POD>
          <delivery_number>1</delivery_number>
          <carrier_name>UPS</carrier_name>
       </POD>
       <POD>
          <delivery_number>2</delivery_number>
          <carrier_name>UPS</carrier_name>
       </POD>
    </ProofOfDelivery>
    I need to make a synchronous RFC call for each set of POD data.
    Thanks in advance!

    Thanks for the inputs.
    I tried with a BPM and multi-mapping transformation before a ForEach block.  I am getting this error:
    Work item 000000028028: Object FLOWITEM method EXECUTE cannot be executed
    Error during result processing of work item 000000028029
    com/sap/xi/tf/_ProofOfDeliveryMultiMapping_com.sap.aii.utilxi.misc.api.BaseRuntimeExceptionRuntim
    Error: Exception CX_MERGE_SPLIT occurred (program: CL_MERGE_SPLIT_SERVICE========CP, include: CL_
    Probably because I am not making/using the container objects properly.  Here is a screenshot of my BPM.  Can anyone spot my issue or point me to an example on this sort of container use?
    http://kdwendel.brinkster.net/images/bpm.jpg
    Thanks

  • How to handle different EDIFACT message types in a single input file

    Hi All,
    Currently we have a requirement where we will receive input with different message types in the same input file (e.g. ORDERS and ORDRSP in the same file).
    We have configured both message types in the document definitions. One more thing: the versions of the messages are different.
    When we pass the input, we get an error.
    The input looks like below,
    UNB
    UNG
    UNH*ORDERS
    UNT
    UNE
    UNG
    UNH*ORDRSP
    UNT
    UNE
    UNZ
    Please assist us to overcome the error.
    Thanks,
    Ravindra.

    Hi Prasanna,
    Thanks for your reply.
    I have created the agreements for both messages, but I'm still getting the error below:
    B2B-50037
    B2B inbound message processing error
    If I give the input as separate messages (I mean two input files), it works fine.
    Also, let me know whether there are any configuration settings to handle multiple messages in a single input file.
    Thanks,
    Ravindra.

  • Can I use Visual Basic to convert form user data from multiple .pdf files to a single .csv file?

    Can I use Visual Basic to convert form user data from multiple .pdf files to a single .csv file? If so, how?

    You can automate Acrobat using IAC (InterApplication Communications), as documented in the Acrobat SDK. Your program could loop through a collection of PDFs, load them in Acrobat, extract the form data from each, and generate a CSV file that contains the data.
    Acrobat can also do this with its "Merge Data Files into Spreadsheet" function, but this is a manual process.
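    A rough VB6/VBA sketch of that IAC loop (the folder, output file, and CSV layout are hypothetical; the form fields are read through the PDDoc's JSObject bridge, which requires Acrobat itself, not just Reader):
    ' Loop over every PDF in a folder and append its form field values to one CSV
    Dim pdDoc As Object, jso As Object
    Dim f As String, row As String, i As Integer
    Open "C:\out\formdata.csv" For Output As #1
    f = Dir("C:\forms\*.pdf")
    Do While f <> ""
        Set pdDoc = CreateObject("AcroExch.PDDoc")
        If pdDoc.Open("C:\forms\" & f) Then
            Set jso = pdDoc.GetJSObject    ' bridge into the document's JavaScript API
            row = f
            For i = 0 To jso.numFields - 1
                ' append each form field value to this file's CSV row
                row = row & "," & jso.getField(jso.getNthFieldName(i)).Value
            Next i
            Print #1, row
            pdDoc.Close
        End If
        f = Dir()
    Loop
    Close #1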

  • Submit quiz results to one single .csv file

    How can I submit quiz results (over 200 people will be taking my Captivate quiz) to a single .csv file?
    Right now, the quizzes are submitted to my email address and attached to the email as a POSTDATA.ATT file. I have to manually go into my Outlook and save the attachment as "FnameLname.csv". So each quiz taker will have an individual .csv file, and I will have over 100 emails and over 100 .csv files!
    How can I make the quiz results submit to a single Quiz_Results.csv file on my web server instead?

    The way I would do this is to submit the scores into a database. In between Captivate and the database you'll need middleware (.asp, ASP.NET, ColdFusion, etc.). This middleware receives your data from Captivate and processes it, submitting it into the database. You can then write another middleware page that produces a report (a web page table, or an exported .csv file) with the data stored in the database.
    Another possibility is to use Captivate's built-in SCORM functionality and submit user scores into an LMS, then run reports and export .csv files from your LMS.
    Sorry, I don't think the functionality to join multiple records into one .csv file is built into Captivate.
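    As an illustration of the middleware idea, a classic ASP page like this could append each submission to one CSV on the web server (the posted field names are assumptions; Captivate's actual POST format needs to be checked, and the database route remains more robust):
    <%
    ' Quiz_Results.asp - append one line per quiz submission to a single CSV
    Dim fso, ts, row
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    ' 8 = ForAppending, True = create the file if it does not exist yet
    Set ts = fso.OpenTextFile(Server.MapPath("Quiz_Results.csv"), 8, True)
    ' "name" and "score" are hypothetical form field names
    row = Request.Form("name") & "," & Request.Form("score") & "," & Now()
    ts.WriteLine row
    ts.Close
    %>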

  • How to set a trail file as completed?

    I am using GoldenGate to do an initial load, using parameters like:
    EXTFILE /tmp/aa, MAXFILES 1000, MEGABYTES 2000
    All the files except the last one are marked as completed, and I don't understand why the last one is not.
    In my use case, I need all the generated files to be completed.
    What is the reason for the initial load extract not doing that?
    That seems like a bug to me?!
    What options / tools do I have to either:
    - configure the initial load extract to complete the last file;
    - complete the last file by hand after the initial load extract process exits.
    According to the "Logdump Reference for Oracle GoldenGate," it seems like it's only a matter of setting the FileSize token to the file's size, in the FileHeader record.
    I could develop a tool that does that, but I would rather find a better way.
    Thanks in advance!

    A trail file's "completed" status has a specific meaning, which is independent of the file's size. This status is indicated by setting the FileSize record (type 0x39) in the FileHeader record with flag 0x00 and the file's actual size as its value. I have verified this experimentally.
    You are right that a trail file is marked as completed by the trail's writer once the file reaches the configured limit.
    In the case of an initial load, the last file is not marked as completed. The last file is typically below the configured size limit, but no additional data is expected to be written into it, so I don't see any reason for the last file in an initial load not to be marked as completed.
    I wrote a small script that marks a trail file as completed, by setting the FileSize record to the file's actual size, and by clearing the flag on that record.
    When I run this script on the last file of an initial load, this file is considered as completed by any reader, as expected.
    I was hoping not to have to write my own script to mark trail files as completed. Is there a standard way to make an initial load extract mark the last file as completed?

  • How do I Deploy a single HTML file that is part of a project without redeploying the project?

    We have an HTML file that we load dynamically into our web page. This file contains tooltips, and making it external to the build allows us to modify the tooltips as the processes they describe change (government processes that are being re-designed under the PPACA).
    Since the file is integral to our project, it is checked into our VS Online repository under source code control and gets automatically redeployed when the whole package gets redeployed. And this works fine.
    But if we check out the file, make some edits and try to deploy ONLY this file, we get a prompt for a password.   And I have no idea what password it wants.
    What is the correct way to do this?

    I am the project owner and have not only admin rights but full admin rights to create admin rights, and I am still being prompted for a password. So that's not the issue.
    And the link you sent me to suggests you did not read what I had posted:
    I have no problems doing a full build and a full deployment.  That works fine.
     I DO NOT WANT TO DO A FULL DEPLOYMENT.
    I want to make a single change to a single resource file that we have under source control since it is a critical component of the solution.   And I want to publish ONLY THAT FILE. 
    So I:
    Make the change in VS 2013 to the file (in a project that is already linked to Azure services and from which we have published before)
    Check in the file
    Right-click on the file
    Choose "Publish XXXX.html"
    And I get a prompt to log onto the FTP server
    ftp://waws-prod-blu-005.azurewebstes.windows.net/site/wwwroot
    HUH!!! I don't have any service or server remotely like that URL anywhere on my system. See attached image below.

  • Single Raw file conversion to .dng

    Sorry if this has been asked before, but I can't find any tutorial on .cr2 to .dng file conversion. I usually shoot RAW+JPEG on my Canon 50D. I am using CS3 and understand the need to convert my Canon raw files to the Adobe DNG format for PS processing. I downloaded and installed the free DNG Converter. I just can't seem to find a way to convert a SINGLE raw file into a SINGLE .dng file; it only wants a whole folder full for conversion. And am I correct that Adobe isn't writing a plugin to let CS3 do the conversion? Thanks, and Happy Thanksgiving!

    You could copy the single CR2 file into a folder all by itself, then run the DNG converter on it.  And I'm not completely sure, but I think I've heard of the ability for the tool to convert a lone file by providing the file path on a command line.  Thus you could drag a single file to the DNG Converter icon if you had it on the desktop.
    And you're correct - you're not going to get new camera support in the Camera Raw 4.x series that works with Photoshop CS3.  Adobe stops adding new camera support when a new major version of Photoshop is released.  But they do provide the (less convenient) DNG Converter for free.
    -Noel

  • Zip: concatenating compressed chunks, will it give a single zip file?

    Are concatenated zipped chunks recognized as a single zip file?
    What I am trying to achieve (using java.util.zip): my application generates a large CSV file; I need to compress this data and write it to a file or send it over the network. But I don't want to wait until the data generation is complete, and instead need to compress the data in chunks, assuming that if I append my compressed chunks to a file, at the end I will get a valid and complete zip file.
    For testing this I made a sample program like:
    // Here a.zip and b.zip are files compressed using java.util.zip; they act as
    // data chunks (actually these will be generated by my
    // application dynamically and in parallel to this process)
    FileInputStream f1 = new FileInputStream(new File("c://a.zip"));
    FileInputStream f2 = new FileInputStream(new File("c://b.zip"));
    // Opening file in append mode
    FileOutputStream fos = new FileOutputStream("c://c.zip",true);
    byte[] data = new byte[(int)new File("c://a.zip").length()];
    f1.read(data);
    fos.write(data);
    byte[] data2 = new byte[(int)new File("c://b.zip").length()];
    f2.read(data2);
    fos.write(data2);
    f1.close();
    f2.close();
    fos.close();
    But this program does not work: I am not getting a final zip file containing the whole data; rather, it gives data only from the first file.
    Can anyone point out the error, or suggest a better way to achieve this?

    Hi,
    I would suggest calling fileOutputStream.flush(); flush() is a bit like a commit in a DB, and it may solve your problem.
    Also, you are creating zip files like a.zip and b.zip on your C:\ drive, which takes extra I/O operations and is not suggested.
    If you want to do it that way anyway, I would suggest deleting them after your process is completed.
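    For what it's worth, java.util.zip will not read concatenated archives back as a single zip, which matches what the original poster observed. If the goal is to compress the CSV as it is generated, a single ZipOutputStream can be fed chunk by chunk and yields one valid archive at the end. A minimal sketch (the file name and chunk producer are hypothetical):
    import java.io.FileOutputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class StreamedZip {
        public static void main(String[] args) throws Exception {
            try (ZipOutputStream zos =
                     new ZipOutputStream(new FileOutputStream("c:/c.zip"))) {
                zos.putNextEntry(new ZipEntry("data.csv"));
                // write each chunk as soon as it is generated;
                // the deflater compresses incrementally
                for (int chunk = 0; chunk < 10; chunk++) {
                    zos.write(generateCsvChunk(chunk));
                    zos.flush();
                }
                zos.closeEntry();  // finish the single entry
            }                      // close() writes the zip's central directory
        }

        // stands in for the application's dynamic CSV generation
        private static byte[] generateCsvChunk(int n) {
            return ("row" + n + ",value" + n + "\n").getBytes();
        }
    }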
