Need suggestion for the best design for a JDBC - RFC - File scenario

Hi Gurus,
Our scenario is JDBC -> XI (<-RFC->) -> File.
The payload is around 5000 records.
Is it advisable to use synchronous RFC communication?
The scenario will be executed only at night, so can we schedule the adapter, given that we are on SP9?
If not, what would be a good design approach?
Also, after scheduling it for a particular period, if the XI server goes down, will the process start immediately after the server comes back up, or will it wait again for that particular time?

Hi,
>>> Is it advisable to use synchronous RFC communication?
- No, unless there is a business requirement for a real-time response.
>>> Can we schedule the adapter, given that we are on SP9?
- Not necessary.
>>> The payload is around 5000 records.
- Is it a requirement to send all 5000 records at once? If not, distribute the load over the whole day if possible (see the sketch below).
If you provide more information, then maybe we can assist you further.
Regards,
Gourav
Reward points if it helps you
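
On distributing the load: the JDBC sender channel polls with a SELECT statement and then runs an UPDATE on the rows it has read, so already-processed rows are not picked up again on the next poll. With a suitable poll interval, the 5000 records can be spread over the night instead of being sent in one shot. A minimal sketch, assuming a staging table with a hypothetical PROCESSED flag:

-- SELECT statement configured in the JDBC sender channel
-- (table and column names are hypothetical)
SELECT order_id, order_data
FROM   staging_orders
WHERE  processed = 'N';

-- UPDATE statement the adapter runs in the same transaction,
-- so the rows just read are skipped by the next poll
UPDATE staging_orders
SET    processed = 'Y'
WHERE  processed = 'N';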

Similar Messages

  • Best Design for a SOAP - XI - BAPI (Multiple)

    Hi all,
    Need your suggestion on the best design to implement a SOAP - XI - BAPI (multiple calls) scenario.
    A non-SAP application will make a SOAP service call to XI. This service needs to call 2 BAPIs (the 1st call gets some required parameters for the second BAPI call). Also, the second call needs a commit.
    What's the best design:
    1. ABAP server proxy
    2. BPM ( to handle multiple calls )
    Thanks and cheers,
    Anisha.

    >>> (The 1st call gets some required parameters for the second BAPI call.) Also, the second call needs a commit.
    1. ABAP server proxy
    Here you can have explicit coding and better control over the process, in terms of calling the first BAPI and passing its result to the exporting parameters of the second BAPI as required.
    2. BPM (to handle multiple calls)
    Performance suffers, as pointed out.

  • Suggest the Best Practice for Procurement of Commodities like crude, copper

    Dear Gurus,
    Please suggest the best practice for the following business process.
    My client has a procurement need for a commodity whose prices are fluctuating, say crude oil. The prices change every day. The client would like to pay the vendor based on the price on the day of goods receipt, but it may be that the price on the GR day is much higher. How can this kind of procurement be controlled? Presently I have activated the price variance through invoice posting, but this is not working.
    Can you suggest a best practice for the procurement of commodities?
    Thank you for your consistent support.
    Regards
    Vinod Kakade

    Hi Vinod,
    Would you know the price by the time the GR is to happen? In that case you can ask the vendor to send the confirmations, just prior to that, with the correct price.
    Please refer to this link:
    Change in PO Price after goods receipts and goods issue
    (though that thread is marked unanswered).
    Regards
    Shailesh

  • What is the best free app for graphic design for mac

    What is the best free app for graphic design for Mac?

    Good place to look for software:
    http://www.macupdate.com/
    And for free alternatives to some popular software packages:
    http://alternativeto.net/
    (If you see an ad there for something called MacKeeper, ignore it and on no account install it - it is malware.)
    And there is of course also the App Store!

  • I have been searching for the best app for converting handwriting to text. WritePad seems great; does the app allow direct entry into a template?

    I have been searching for the best app for converting handwriting to text on my iPad. WritePad by Phat seems great; does the app allow direct entry into a template?

    WritePad only works within the app itself - it does not allow you to write in another program. While it does allow you to save a document as a text or PDF file, it won't (I believe) let you open a file created outside of WritePad and work on it (except maybe text files - I would have to investigate and see).

  • Is Photoshop Elements 13 the right software for scarves design for a beginner?

    Is Photoshop Elements 13 the right software for scarves design for a beginner?

    <moved from Downloading, Installing, Setting Up to Photoshop Elements>

  • Need Recommendations on Best Configuration For My SW Needs Please!

    Ok,
    Besides all the "normal" stuff, I run Apple Aperture and Pixelmator (and may one day move to Photoshop). My finances limit me to either a 4-core @ 2.93 GHz or an 8-core @ 2.26 GHz. (Assume the RAM is the same at 6 GB.)
    Which unit will deliver me the best performance for these applications? If you need more info, please just let me know.
    Thanks in advance!
    Mark

    Hi,
    Don't forget about places like this: http://www.powermax.com/parts/code/PMCNMP
    I saved $400 in taxes buying my Mac from them. And they are darn fine people to boot.
    Also keep an eye out for what Apple sells as refurbished. Not that much of a price break, but it is some. Plus, Applecare is almost essential for these machines and that still covers their used models.
    The 8-core will give you a little more speed and offers more options for growth, but I think the Hatter's advice is very sound.

  • Is this the best design for asynchronous notifications (such as email)? Current design uses Web Site, Azure Service Bus Queue, Table Storage and Cloud Service Worker Role.

    I am asking for feedback on this design. Here is an example user story:
    As a group admin on the website I want to be notified when a user in my group uploads a file to the group.
    Easiest solution would be that in the code handling the upload, we just directly create an email message in there and send it. However, this seems like it isn't really the appropriate level of separation of concerns, so instead we are thinking to have a separate
    worker process which does nothing but send notifications. So, the website in the upload code handles receiving the file, extracting some metadata from it (like filename) and writing this to the database. As soon as it is done handling the file upload it then
    does two things: Writes the details of the notification to be sent (such as subject, filename, etc...) to a dedicated "notification" table and also creates a message in a queue which the notification sending worker process monitors. The entire sequence
    is shown in the diagram below.
    My questions are: Do you see any drawbacks in this design? Is there a better design? The team wants to use Azure Worker Roles, Queues and Table storage. Is it the right call to use these components or is this design unnecessarily complex? Quality attribute
    requirements are that it is easy to code, easy to maintain, easy to debug at runtime, auditable (history is available of when notifications were sent, etc...), monitor-able. Any other quality attributes you think we should be designing for?
    More info:
    We are creating a cloud application (in Azure) in which there are at least 2 components. The first is the "source" component (for example a UI / website) in which some action happens or some condition is met that triggers a second component or "worker"
    to perform some job. These jobs have details or metadata associated with them which we plan to store in Azure Table Storage. Here is the pattern we are considering:
    Steps:
    Condition for job met.
    Source writes job details to table.
    Source puts job in queue.
    Asynchronously:
    Worker accepts job from queue.
    Worker Records DateTimeStarted in table.
    Queue marks job as "in progress".
    Worker performs job.
    Worker updates table with details (including DateTimeCompleted).
    Worker reports completion to queue.
    Job deleted from queue.
    Please comment and let me know if I have this right, or if there is some better pattern. For example sake, consider the work to be "sending a notification" such as an email whose template fields are filled from the "details" mentioned in
    the pattern.

    Hi,
    Thanks for your posting.
    This development mode can exclude some errors, such as file uploads completing at the same time. From my experience, this is a good choice to achieve the goal.
    Best Regards,
    Jambor  

  • Need advise on best settings for FCP 7!

    I am looking for some advice on the best settings for FCP 7 when I am syncing sound using PluralEyes, exporting into QuickTime as one file, and then importing that new file into FCP 7 for editing. I have noticed that sometimes my QuickTime files are very large (20 GB) and sometimes they are much smaller (130 MB), but I never changed the settings.
    I am working in FCP 7 and shot using my Panasonic GH3 at 24 fps (which comes to 23.98). My frame size is 1920x1080. The compressor is H.264. Data rate: 8.6 MB/sec. Audio rate: 48.0 kHz. I have heard that FCP 7 does not properly deal with H.264. So, I am wondering, what should my settings be? Should I set them to "Apple ProRes 422 (HQ) 1920x1080 24p 48 kHz"? Is there anything I need to know when I export to QuickTime 7 Pro in order not to reduce the quality of the video? Should I be changing my video files first before I even start to work on them?
    I haven't edited in almost ten years so I am desperate need of some guidance!
    Final Cut Pro 7, Mac OS X (10.6.8)

    Hi Shane Ross,
    Thanks for your help. So, I already synced all of my interviews in FCP 7 (using PluralEyes) before I changed the video to ProRes 422. Can I just go into FCP (in the same project), compress the video footage, import the new video files and throw them into the timeline? Or will this cause issues? I tried this as a test and it seems to be okay, but I just want to check that there aren't any issues that I am not considering.
    Also, when I do compress, it seems to take forever (sometimes two hours). Is there any way to make this move along faster?
    Thanks so much!

  • Best design for HA Fileshare on existing Hyper-V Cluster?

    We have a three-node 2012 R2 Hyper-V cluster. The storage is an HP MSA 2000 G3 SAS block storage array with CSVs.
    We have a file server for all users running as a VM on the cluster. File server availability is important, and it's difficult to take this file server down for the monthly patching. So we want to make these file services HA. Nearly all clients are Windows 8.1, so SMB 3 can be used.
    What is the best way to make these file services HA?
    1. The easiest way would probably be to migrate these file server resources to a dedicated LUN on the MSA 2000 and to add a "general file server role" to the existing Hyper-V cluster. But is it supported and a good solution to provide Hyper-V VMs and HA file services on the same cluster (even when the performance requirements for the file services are not high)? Or does this configuration affect the Hyper-V VM performance too much?
    2. Is it better to create a two-node guest cluster with "Shared VHDX" for the file services? I'm not sure if this would even work, because we had "Persistent Reservation" warnings when creating the Hyper-V cluster with the MSA 2000. According to "http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx", these warnings are normal with block storage and can be ignored as long as we never want to create Windows storage pools or storage spaces. But the Hyper-V MMC shows that "Shared VHDX" works with "persistent reservations".
    3. Are there other possibilities to provide HA file services with this configuration without buying new HW? (Remark: DFSR with two independent file servers is probably not a good solution; we have a lot of data that changes frequently.)
    Thank you in advance for any advice and recommendations!
    Franz

    Hi Franz,
    If you are not going to be using Storage Spaces in the cluster, this is a warning that you can safely ignore.
    It passes the normal SCSI-3 Persistent Reservation tests, so you are good with those. Additionally, when we use the cluster we can install Cluster-Aware Updating (CAU), which will automatically install the cluster updates.
    The related KB:
    Requirements and Best Practices for Cluster-Aware Updating
    https://technet.microsoft.com/en-us/library/jj134234.aspx
    Cluster-Aware Updating: Frequently Asked Questions
    https://technet.microsoft.com/en-us/library/hh831367.aspx
    I’m glad to be of help to you!

  • Best Design for a booking system

    I am trying to create a booking database... but the design I keep coming up with makes me a bit uneasy. I don't really like it (because of the massive joins involved in determining which time slots are available), but I don't see any other way to do this. So this is my design.
    --This table stores incidences of booked time slots
    TABLE:Booking_Incidence
    BookingID
    BookingStartDate (DateTime)
    BookingEndDate (DateTime)
    FK_UserID
    --This table stores the generally available booking times for each week. E.g. if users can book from 9:00 am to 5:00 pm, Monday to Friday, then there would be an entry with
    SlotStartTime = 9:00, SlotEndTime = 17:00, Sun = false, Mon = true, Tue = true, Wed = true, Thur = true, Fri = true, Sat = false
    TABLE:General_Avail_Booking
    SlotStartTime (Time)
    SlotEndTime (Time)
    Sun (boolean)
    Mon (boolean)
    Tue (boolean)
    Wed (boolean)
    Thur (boolean)
    Fri (boolean)
    Sat (boolean)
    --This table stores any possible exceptions that might occur outside of the general booking times (e.g. a holiday). bAvailable determines whether the exception allows for a new booking or cancels an existing one.
    Table: Booking_Exceptions
    SlotStartDate (DateTime)
    SlotEndDate (DateTime)
    bAvailable (boolean)
    I use General_Avail_Booking and Booking_Exceptions to determine the possible times when users can actually book a time slot, and then store what time slots users book in Booking_Incidence.

    You're thinking about the problem visually and so you have something that looks like a wall planner turned into SQL. Way too linear. A booked room is really an intersection between three dimensions - the room, the time it's booked for and the booking itself (who booked it, purpose, etc). If you track the bookings you can infer the times when the room is free.
    The following code is very much an "it's almost Easter" spike and not guaranteed not to have its own problems.
    SQL> CREATE TABLE booking
      2   (id number not null
      3     , booked_by varchar2(20) not null
      4     , booked_on date not null
      5     , CONSTRAINT book_pk PRIMARY KEY (id))
      6  /
    Table created.
    SQL> CREATE TABLE room
      2       (id number not null
      3        , location varchar2(10)
      4        , CONSTRAINT room_pk PRIMARY KEY (id))
      5  /
    Table created.
    SQL> CREATE TABLE slot
      2       (id number not null
      3        , slot_time varchar2(10)
      4        , CONSTRAINT slot_pk PRIMARY KEY (id))
      5  /
    Table created.
    SQL> CREATE TABLE booked_slot
      2   (room_id number not null
      3    , booked_date date not null
      4    , slot_id number not null
      5    , booking_id number not null
      6    , CONSTRAINT bslot_pk PRIMARY KEY (room_id, booked_date, slot_id)
      7    , CONSTRAINT bslot_book_fk FOREIGN KEY (booking_id)
      8       REFERENCES booking(id))
      9  /
    Table created.
    SQL> CREATE OR REPLACE TRIGGER bslot_bir BEFORE INSERT ON booked_slot
      2  FOR EACH ROW
      3  BEGIN
      4      :NEW.booked_date := trunc(:NEW.booked_date);
      5  END;
      6  /
    Trigger created.
    SQL>
    What I'm going to do here is have a little business rule that says you can't book a room for less than half an hour.
    SQL> INSERT INTO slot VALUES (1000, '10:00')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1030, '10:30')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1100, '11:00')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1130, '11:30')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1200, '12:00')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1230, '12:30')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1300, '13:00')
      2  /
    1 row created.
    SQL> INSERT INTO slot VALUES (1330, '13:30')
      2  /
    1 row created.
    SQL> INSERT INTO room VALUES (1, '3/A')
      2  /
    1 row created.
    SQL> INSERT INTO room VALUES (2, '3/B')
      2  /
    1 row created.
    SQL> INSERT INTO room VALUES (3, 'Rm 8')
      2  /
    1 row created.
    SQL>
    Setup done; let's book room 3/B for an hour and a half....
    SQL> INSERT INTO booking VALUES (1, 'APC', sysdate)
    2 /
    1 row created.
    SQL> INSERT INTO booked_slot VALUES (2, sysdate + 3, 1130, 1)
    2 /
    1 row created.
    SQL> INSERT INTO booked_slot VALUES (2, sysdate + 3, 1200, 1)
    2 /
    1 row created.
    SQL> INSERT INTO booked_slot VALUES (2, sysdate + 3, 1230, 1)
    2 /
    1 row created.
    SQL>
    But now the Easter Rabbit needs to book a room for two hours on Sunday. Which rooms are free?
    SQL> SELECT room.location
      2  FROM   room
      3  WHERE  NOT EXISTS
      4         ( SELECT null
      5           FROM   booked_slot
      6           WHERE  room_id = room.id
      7           AND    booked_date = to_date('16-APR-2006')
      8           AND    slot_id BETWEEN 1000 AND 1200 )
      9  /
    LOCATION
    3/A
    Rm 8
    SQL>
    You'll notice the value of making the slot keys the same as the time they represent.
    Of course, producing a map of availability is not that easy, but it should not be that hard either.
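    For illustration, one rough (untested) way to build such a map with the tables above is to pair every room with every slot and outer join the booked slots for the day in question (a format mask is added to the date literal):
    SELECT room.location
         , slot.slot_time
         , CASE WHEN bs.booking_id IS NULL THEN 'free' ELSE 'booked' END AS status
    FROM   room
    CROSS  JOIN slot
    LEFT   JOIN booked_slot bs
           ON  bs.room_id     = room.id
           AND bs.slot_id     = slot.id
           AND bs.booked_date = to_date('16-APR-2006', 'DD-MON-YYYY')
    ORDER  BY room.location, slot.id;
    Because the slot keys double as times, range checks such as the BETWEEN in the NOT EXISTS query above still work here.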
    Cheers, APC

  • Need advice on best setup for Extreme and Express w/ (n only) network

    I'd like to get some advice on the best setup for my situation. I've read a number of posts on WDS, Extending a Network, etc. and, unfortunately, I'm now more confused than ever.
    We have an AirPort Extreme 802.11n using WPA2 Personal, 2.4 GHz (n only connection), which I've found to give us the best range/connection speeds for the following devices (all computers running 10.5.5, Apple TVs using the most current update):
    (2) MacBooks
    iMac
    (2) AppleTVs
    The good news is we have a large house; the bad news is we have a large house. Meaning, of course, that I don't get the range in parts of the house I'd like to. I also have an older Mini (G4) connected to the AEBS through Ethernet (the Mini acts as the iTunes server for the ATVs).
    I just bought a new AirPort Express with the desire to place it on the other side of the house, both to enhance the range of the wireless network and to provide another wired-to-wireless connection to the network.
    I initially just chose to "Extend a wireless network", but that seems to have a MAJOR adverse impact on the speeds of the wireless network, dropping the streaming to one of the ATVs by about 90%. I would like to maintain the security settings I have, as well as the 2.4 GHz (n only), since these provide the best speed/connection range on the AEBS.
    My question then is what is the best way to use the AX (WDS? Bridge?).

    The best way to use it is the option you chose "Extend a wireless network".
    WDS forces you to the much slower 802.11g and even cuts that bandwidth in half.
    Operating as a bridge has nothing to do with wirelessly extending a network. Changing this option won't have any effect on wireless bandwidth.

  • Need Suggestion on the Design of a New Workbench

    Hi All,
    I need a suggestion on the design of an agreement workbench.
    The requirement goes this way:
    We will have a workbench main screen, where header and line details will be entered manually (or sourced from a legacy system). On the main screen there will be a few buttons; clicking on one will open the corresponding subform (around 6-8 screens) or supporting details (the data can be entered or interfaced).
    We have two approaches:
    1. Keeping everything in a single .fmb file
    2. Creating one .fmb file for the main screen and different .fmb files for each of the individual screens, and calling them from the main screen.
    Please suggest the best approach considering factors like maintenance, user-friendliness, switching between the main and child forms, and any other possible factors that can make a difference.
    Thanks in advance!
    Thanks,
    Pavan

    Hello,
    All I can say is that small modules are faster to load and easiest to maintain.
    Francois
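    To illustrate the multi-module approach (option 2 above), each button on the main screen would open its child module through the CALL_FORM (or OPEN_FORM) built-in. A minimal sketch of a WHEN-BUTTON-PRESSED trigger, with a hypothetical module name:
    -- WHEN-BUTTON-PRESSED trigger on the main workbench form
    -- (XX_AGREEMENT_TERMS is a hypothetical child .fmb module)
    BEGIN
      CALL_FORM('XX_AGREEMENT_TERMS',
                NO_HIDE,       -- keep the main screen visible behind the child form
                DO_REPLACE);   -- let the child form's menu replace the caller's
    END;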

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server.   (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non virtual and virtual volumes/partitions/drives,  
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),  
    IDE vs SCSI and drivers both non virtual and virtual and the booting there of,  
    disk caching settings on both host and guests.  
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undergo this project, and most of the information was pretty solid with little ambiguity, except for information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Server 2008 R2, and we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Need suggestion on the best way to do database insertion using OSB

    Hi all, again and again I need your suggestion. What is the best way to write data to a database (into a few tables) using OSB as fast as possible (speed consideration)? Should I create lots of DB schemas to support the JCA adapter and transform each message and write it to the database? Or should I use a Java callout and then an EJB to write to the database?

    Hi,
    We had a similar scenario in our project, and this is my take on it.
    It's better to use a JCA DBAdapter to execute/invoke a stored procedure and then have the PL/SQL insertion logic inside the stored procedure.
    As the OSB DBAdapter configuration is very tightly coupled to the DB structure, any changes to the column types of the table will mean a regeneration of the adapter WSDL.
    In general, even for other DB operations like select, delete, and update, it is a good idea to use a stored procedure in conjunction with the DBAdapter to decouple the link to the DB to some extent.
    Thanks,
    Patrick
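
    A minimal sketch of the stored-procedure approach described above (table, column and parameter names are hypothetical; the DBAdapter would be generated against this procedure rather than against the table):
    CREATE OR REPLACE PROCEDURE insert_order_line
      ( p_order_id  IN NUMBER
      , p_item_code IN VARCHAR2
      , p_quantity  IN NUMBER )
    AS
    BEGIN
      -- the PL/SQL body owns the mapping to the physical table, so column
      -- changes need not force a regeneration of the adapter WSDL
      INSERT INTO order_line (order_id, item_code, quantity, created_on)
      VALUES (p_order_id, p_item_code, p_quantity, SYSDATE);
    END insert_order_line;
    /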
