Simple database integration

Hello!
I'm new to Dreamweaver 8 as of today. Been struggling along
with FrontPage for way too long!
What I need to do is include a 10,000-line Excel database
(auto parts) in our web site that only a few special clients can
search, but NOT SEE the entire DB. I'd like to have a search box
where they can enter a part number or possibly a simple
description, and then a results box showing only yes/no
availability, price, and perhaps the description. Actually,
availability is the only result necessary. They will not be
ordering directly from the site. We're just trying to cut down on
phone availability checks.
Can DW8 do this?
Or?
Thanks very much!
Greg in New Orleans
Geaux Saints!!!

No, I don't have any of that knowledge, nor do I have the
desire or time to learn it.
I'm not being flippant here, it just isn't worth spending too
much time or money.
Is there a service that can accomplish what we need for a
small fee?
Like I said, this is only going to be viewed by a couple/few
clients, so we are not looking to spend thousands on a MySQL type
solution. Also, once these parts are sold, they will not be
replaced, so the inventory will actually shrink over time.
Thanks.
G
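For what it's worth, the lookup described above is a keyed search over a flat file, something a very small script or service can do without a full database. Below is a minimal sketch in Java (the file name parts.csv, its column layout, and the port are assumptions; real use would also need the page placed behind a password):

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class PartsLookup {
        public static void main(String[] args) throws IOException {
            // Load a CSV export of the spreadsheet, keyed by part number.
            // Assumed layout: partNumber,description,price
            Map<String, String> parts = new HashMap<>();
            for (String line : Files.readAllLines(Paths.get("parts.csv"))) {
                String[] f = line.split(",", 2);
                if (f.length == 2) parts.put(f[0].trim(), f[1].trim());
            }
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/check", exchange -> {
                // Expect a query such as /check?part=AB123
                String q = exchange.getRequestURI().getQuery();
                String part = (q != null && q.startsWith("part=")) ? q.substring(5) : "";
                String info = parts.get(part);
                // Only availability (plus price/description) is revealed, never the whole list.
                String body = (info != null) ? "YES - " + info : "NO";
                byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(bytes);
                }
            });
            server.start();
        }
    }

A paid hosting service could run the same kind of thing; the point is only that 10,000 rows is tiny by database standards.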

Similar Messages

  • Looking for a simple database app.

    I have an Excel sheet that I use to store simple database data: like a flat-file database with only 4 columns, no relations, no calculations, just data storage.
    I don't want to download and install over 100 MB of OpenOffice or NeoOffice just to be able to see my Excel sheet, so I was thinking of converting it to a simple card-filer-like database.
    Guess what? There are no simple databases for the Mac! There are OpenOffice and NeoOffice, and there is FileMaker. But no simple, small, configurable card-filer. I checked VersionTracker and IUseThis, but I cannot find anything.
    That makes me think it must already be available among the standard installed apps (please say that I'm right) and I just don't see it.
    Any suggestions? Any help? Am I really missing the point here?
    Thanks all!
    Ton.

    Haven't looked at Wallet. Looked at iData, but that is payware now. I'm now trying OmniOutliner, which seems to look OK.
    Ton.

  • Check database integrity throws 665 error when executing check database integrity task in SSMS.

    I have read all the other cases that relate to this error and cannot get this to work. Running SQL Server 2012 SP1 on Windows Server 2012 R2. Disk space and permissions are fine, but I get the error below when I try to use the check database integrity task
    within my maintenance plan, on both system and user databases. I have researched this and fragmentation is not the issue. I'm lost at this point and would appreciate at least some steps to try. The databases are not "read only", as I have read this may
    contribute to the problem. All other maintenance tasks run fine.
    Error message from SQL LOG
    Check Database integrity on Local server connection
    Databases: All system databases
    Task start: 2014-01-13T11:00:04.
    Task end: 2014-01-13T11:00:04.
    Failed:(-1073548784) Executing the query "DBCC CHECKDB(N'master', NOINDEX)
    " failed with the following error: "A database snapshot cannot be created because it failed to start.
    A database snapshot cannot be created because it failed to start.
    MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\master.mdf:MSSQL_DBCC9'.
    The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
    The database could not be exclusively locked to perform the operation.
    Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous
    errors for more details.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
    Error Message from Log File Viewer in SSMS:
    Source: Check Database Integrity Task      Executing query "USE [ReportServer]  ".: 50% complete  End Progress  Error: 2014-01-13 11:31:54.92     Code: 0xC002F210    
    Source: Check Database Integrity Task Execute SQL Task     Description: Executing the query "DBCC CHECKDB(N'ReportServer')  WITH NO_INFOMSGS  " failed with the following error: "A database snapshot cannot be created
    because it failed to start.  A database snapshot cannot be created because it failed to start.  MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to
    expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\ReportServer.mdf:MSSQL_DBCC9'.  The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not
    support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.  The database could not be exclusively locked to perform the operation.  Check statement aborted. The database could not be checked as a database
    snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.". Possible failure reasons: Problems with
    the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  End Error  Progress: 2014-01-13 11:31:54.93     Source: Check Database Integrity Task     
    Executing query "USE [ReportServerTempDB]  ".: 50% complete  End Progress  Error: 2014-01-13 11:31:55.02     Code: 0xC002F210     Source: Check Database Integrity Task Execute SQL Task    
    Description: Executing the query "DBCC CHECKDB(N'ReportServerTempDB')  WITH NO_INFOM..." failed with the following error: "A database snapshot cannot be created because it failed to start.  A database snapshot cannot be created because
    it failed to start.  MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\ReportServerTempDB.mdf:MSSQL_DBCC9'. 
    The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.".
    Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  End Error  Progress: 2014-01-13 11:31:55.02     Source:
    Check Database Integrity Task      Executing query "USE [AddressUpload]  ".: 50% complete  End Progress  Error: 2014-01-13 11:31:55.13     Code: 0xC002F210     Source:
    Check Database Integrity Task Execute SQL Task     Description: Executing the query "DBCC CHECKDB(N'AddressUpload')  WITH NO_INFOMSGS  " failed with the following error: "A database snapshot cannot be created because
    it failed to start.  A database snapshot cannot be created because it failed to start.  MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand
    the physical file 'E:\SQLData\MSSQL11.MSSQLSERVER\MSSQL\DATA\database1.mdf:MSSQL_DBCC9'.  The database snapshot for online checks could not be created. Either th...  The package execution fa...  The step failed.

    ReFS is NOT supported for use with SQL Server 2012. One such item, which you've stumbled upon, is the fact that alternate streams and sparse files are not implemented in ReFS, and thus these issues are caused. You *could* force the CHECKDB to execute by using
    WITH TABLOCK, but that'll require exclusive access to the database for the duration of the CHECKDB scan, and that's not something I would advise doing.
    Sean Gallardy | Blog | Twitter
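    To make the workaround concrete, here is a rough sketch of running that check over JDBC (the Microsoft JDBC driver, the connection string, and the credentials are assumptions; as the reply notes, WITH TABLOCK takes locks that block concurrent writers for the duration of the check):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CheckDbWithTablock {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string; substitute your server and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=master;user=dba;password=secret";
                try (Connection conn = DriverManager.getConnection(url);
                     Statement stmt = conn.createStatement()) {
                    // WITH TABLOCK skips the internal database snapshot, so no
                    // sparse files are needed on the volume hosting the data files.
                    stmt.execute("DBCC CHECKDB(N'ReportServer') WITH TABLOCK, NO_INFOMSGS");
                }
            }
        }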

  • Very simple database required

    Hi
    I'm looking for a very simple database solution. I have some very large .csv files that I need to query against before importing into Excel. FileMaker, etc., is over the top for what I need. Any ideas?
    Thanks.
    PowerPC G5   Mac OS X (10.4.6)  

    Welcome to Apple Discussions!
    You could use a script that converts CSV to AppleWorks:
    http://www.tandb.com.au/appleworks/import/
    And then export from AppleWorks to Excel.
    I know the Power Mac G5 doesn't come with AppleWorks, but it is a quarter of the price of FileMaker Pro.
    Maybe the authors of the script could help you.
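    If the goal is only to cut the .csv files down before they reach Excel, a few lines of code will also do it. A rough sketch in Java (the file names and the match term are placeholders):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class CsvFilter {
            public static void main(String[] args) throws IOException {
                String term = "Smith"; // placeholder query term
                // Stream the large CSV and keep only matching rows, so the whole
                // file never has to be opened in a spreadsheet just to filter it.
                try (Stream<String> lines = Files.lines(Paths.get("big.csv"))) {
                    Files.write(Paths.get("filtered.csv"),
                            lines.filter(l -> l.contains(term))
                                 .collect(Collectors.toList()));
                }
            }
        }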

  • A very simple database system with JSON

    If we need to store some data in a database, but without the need for advanced SQL features, can we use this scheme (written here in JavaScript / Node.js):
    // the DB will be in RAM!
    var fs = require('fs');
    var DBFILENAME = './myDb.json';
    var myDb = {};
    // read the DB from disk if the file exists
    try { myDb = JSON.parse(fs.readFileSync(DBFILENAME)); } catch (e) { }
    // serialize to disk every minute and when the process terminates
    // (synchronous write so the exit handlers actually flush)
    function serialize() { fs.writeFileSync(DBFILENAME, JSON.stringify(myDb)); }
    setInterval(serialize, 60 * 1000);
    process.on('SIGTERM', serialize); process.on('SIGINT', serialize);
    myDb['record1'] = 'foo';
    myDb['record2'] = 'bar';
    See the longer version here as a gist (8 lines of code).
    1) Does this DB practice have a name? Is it really so bad? Is it possible to use such a 10-lines-of-code DB system, even in production for websites that have a < 1 GB database?
    2) Scalability: up to which size would this system work without performance problems? I.e., would it work up to 2 GB of data on a normal Linux server with 4 GB RAM? Or would there be real performance problems?
    Note: a minute seems enough to write 2 GB of data to disk... Of course I admit it is 100% non-optimized; we could add a diff feature between the (n-1)th and nth writes to disk...
    3) Search: can I use ready-to-use tools to do searches in such a "simple" database? Lucene, ElasticSearch, Sphinx, something else?

    Nothing is wrong with this for development. If it has a name, I suppose it would be a mock database. It is not uncommon to create a mock database that can emulate very basic functionality. You have the added advantage that you start from a scratch
    database each and every time, so you know your program would also work against a potentially empty NoSQL database, for the same reason.
    However, this is not a reasonable permanent solution by any means.
    Most programmers, given the small overhead, will simply go ahead and make it work with a NoSQL database. It may take slightly longer, but you are then programming directly against what will run in production, instead of being forced to adapt your program and test it beforehand.
    Scalability is a non-issue because you're always working in development. If you crash your own computer, it is not that big of a deal. The limit of such a database would be only that of your RAM (or the RAM of the computer running the server); however, I
    think you'd find that the program gets very slow before you even reach the point where it crashes.
    Perhaps you could adapt some searching mechanism for the mock database, but if you're going to go through that trouble, just go ahead and use a proper NoSQL database. If you literally lose more than 1 hour working on this mock database, then you've wasted
    time.

  • iTunes database integrity check?

    In iTunes I have a few ! that have appeared in the first column, indicating iTunes can't find the file. So far I have found three folders (albums) that are missing from my music library disc, and I don't understand how or when they disappeared. I haven't found any individual missing files yet, just missing whole folders. It appears iTunes doesn't update the ! indicator until it has some reason to actually go open the file. Is there a way to automate this? So far I've been looking at each song with Command-I to check the Where info under Summary, or selecting the first song of an album and using Command-R to view the songs in Finder. This is going to take a long time with nearly 8000 songs in my library. I'm trying to get a handle on the extent of the problem. I am careful to only use iTunes to manage the library (I don't move files around with Finder). My library is on an external FireWire drive. Ideally, I would like there to be an "iTunes database integrity check" command.


  • SIMPLE Database Design Problem!

    Mapping is a big problem for many complex applications.
    So what happens if we put all the tables into one table called ENTITY?
    I have more than 300 attributeTypes, and there will be lots of null values in the records of that single table, as every entityType uses the same table.
    Other than wasting space, if I put a clustered index on my entityType column in that table, what kind of performance penalties do I get?
    Definition of the table
    ENTITY
    EntityID > uniqueidentifier
    EntityType > Tells the entityTypeName
    Name >
    LastName >
    CompanyName > ... (and so on, 300 attributeTypes in all)
    OppurtunityPeriod >
    PS:There is also another table called RELATION that points the relations between entities.

    > check the column with WHERE _entityType='PERSON': as there is a clustered index on entityType, there is NO performance decrease.
    > There is also a clustered index on the RELATION table, on relationType.
    > When we say WHERE _entityType='PERSON' or WHERE relationType='CONTACTMECHANISM', it scans the clustered index first; it acts like a table, as it is physically ordered.
    I was thinking in terms of using several conditions in the same SELECT, such as WHERE _entityType='PERSON' AND LastName LIKE 'A%'. In your case you have to use at least two indices, and since your clustered index comes first ...
    > Have you ever thought of using constraints in your model? How would you realize those?
    > ...in fact we did. We have arranged the generic object model in an object database. The knowledge information is held in the object database.
    So your relational database is used only as "simple" storage; everything has to go through your object database.
    > But the data schema is held in the RDBMS, with code generation that creates a schema to hold the data.
    If you think that this approach makes sense, why not.
    > But to have an efficient mapping and good performance, we have thought about building only one table. The problem is we know we are losing some space, but the thing is, hard disk is much cheaper than RAM and CPU, so our trade-off centered on the storage cost. But I still wonder if there is a point that I have missed in terms of performance?
    Just test your approach with sufficient data - only you know how many records you have to store in your model.
    > PS: it is not really effective to use generic object models in object databases either, as the CPU cost is high when you are holding the data.
    I don't know if I'd have taken your approach - using two database systems to hold data and business logic.
    > PS2: an RDBMS is a value-based system, whereas object databases are identity-based; we are trying to be in the gray area of both worlds.
    Like I wrote: if your approach works and scales to the required size, why not? I would assume that you did a load test with your approach.
    What I would question, though, is that you're discussing a "SIMPLE Database Design" problem. I don't see anything simple in your approach when it comes to implementation.
    C.
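    To make the trade-off concrete, here is a small JDBC sketch of the single-table layout and the query pattern under discussion (column types, index name, and connection details are assumptions; only four of the ~300 attribute columns are shown):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class EntityTableSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical SQL Server connection; substitute real credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=test;user=dev;password=secret";
                try (Connection conn = DriverManager.getConnection(url);
                     Statement stmt = conn.createStatement()) {
                    stmt.execute("CREATE TABLE ENTITY (" +
                            "EntityID UNIQUEIDENTIFIER NOT NULL, " +
                            "EntityType VARCHAR(50) NOT NULL, " +
                            "Name VARCHAR(100) NULL, " +
                            "LastName VARCHAR(100) NULL)"); // ...plus ~300 more nullable columns
                    stmt.execute("CREATE CLUSTERED INDEX IX_Entity_Type ON ENTITY (EntityType)");
                    // The pattern from the thread: the clustered index narrows the scan
                    // to the PERSON range, but the LastName predicate is then resolved
                    // by scanning that whole range unless a second index covers it.
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT Name, LastName FROM ENTITY " +
                            "WHERE EntityType = 'PERSON' AND LastName LIKE 'A%'")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("LastName") + ", " + rs.getString("Name"));
                        }
                    }
                }
            }
        }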

  • Database Integrity Constraints

    Hi All
    Is there any DI API component, or any other solution, to handle database integrity constraints (relationships) between a parent table and a child table? When I delete a record from the parent table that is referenced by some records in a child table, I want this action to be denied.
    Can I get a solution?
    best regards
    Med

    Hi Juha
    I am developing an Add-On; for this case, I must create my own user tables:
    I have a User MasterData Table_1 (MD_Table_1),
    and a User MasterData Table_2 (MD_Table_2) which references the first one
    (MD_Table_2.LinkedTable = "MD_Table_1").
    On these tables, I have created UDOs:
    UDO.CanDelete = SAPbobsCOM.BoYesNoEnum.tYES
    UDO.ObjectType = SAPbobsCOM.BoUDOObjType.boud_MasterData
    UDO.TableName="MD_Table_1"
    UDO.Add()
    UDO.CanDelete = SAPbobsCOM.BoYesNoEnum.tYES
    UDO.ObjectType = SAPbobsCOM.BoUDOObjType.boud_MasterData
    UDO.TableName="MD_Table_2"
    UDO.Add()
    The problem is:
    When I delete a record (rec_1) from the first table (MD_Table_1) which is referenced by a record (rec_2) of the second (MD_Table_2), the action completes successfully, in spite of rec_1 being referenced by rec_2.
    regards

  • Database Integrity check failed, how to find an uncorrupted backup for recovery

    I have a database integrity check task that runs weekly. The job ran March 23rd but failed on March 30th. We have identified that there is corruption in the database, and now the task is to restore it from backup (with data loss). We have a database backup running
    every night, and I need to know how I can find the latest backup that's not corrupted.
    The MSDN documentation says the "RESTORE VERIFYONLY" command does not verify whether the structure of the data contained within the backup set is correct. Does that mean the restore command will not be able to detect corruption in the database, and I just
    need to restore each of the backups, starting from the latest, to see if the integrity check fails after restore? Or will RESTORE VERIFYONLY confirm that the database is uncorrupted?

    As the documentation suggests, RESTORE VERIFYONLY checks the structure of the backup but not the database itself.  You'll need to restore the backup to check the database consistency.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
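    In practice that means restoring each backup, newest first, under a scratch name and running DBCC CHECKDB on the restored copy until one passes. A rough JDBC sketch (the backup paths, logical file names, and connection details are all assumptions for illustration):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class FindCleanBackup {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://localhost;databaseName=master;user=dba;password=secret";
                // Newest backup first; these paths are placeholders.
                String[] backups = { "D:\\backups\\db_0330.bak", "D:\\backups\\db_0329.bak" };
                try (Connection conn = DriverManager.getConnection(url);
                     Statement stmt = conn.createStatement()) {
                    for (String bak : backups) {
                        // Restore under a scratch name so production is untouched.
                        stmt.execute("RESTORE DATABASE VerifyScratch FROM DISK = N'" + bak + "' " +
                                "WITH REPLACE, " +
                                "MOVE N'db_data' TO N'D:\\scratch\\verify.mdf', " +
                                "MOVE N'db_log' TO N'D:\\scratch\\verify.ldf'");
                        try {
                            stmt.execute("DBCC CHECKDB(N'VerifyScratch') WITH NO_INFOMSGS");
                            System.out.println("Clean backup: " + bak);
                            break; // first backup whose restored copy passes CHECKDB
                        } catch (SQLException corrupt) {
                            System.out.println("Corruption found in restore of " + bak);
                        }
                    }
                }
            }
        }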

  • CCX7.0 Database Integration with Oracle 10g through ODBC

    Good Day…
    We are about to integrate CCX 7.0 with Oracle 10g. The CCX 7.0 documents show the way to integrate CCX 7.0 with SQL Server, but mention nothing about Oracle 10g. Anyway...
    I tried to check the ODBC options; there were many. I tried to select an Oracle-related choice, just to find out that Oracle should supply the driver for it. In the forums, people say it’s an Oracle Client, but what will this client do? Am I going to connect through it to the Oracle database server, or is it going to query the database for my CCX server? What am I going to use in this case, a user DSN or a system DSN?
    If there are any tips or recommendations I will be grateful, as it is my first database integration.
    Thanks
    AT

    Hi Aaron
    Thanks Aaron for the information, and yes you are right, I am not a database person.
    I have some questions here; sorry if some of them sound silly.
    1) Is there a certain Oracle Client version, or will any Oracle Client work for me?
    2) Do you recommend a system DSN or a user DSN?
    3) "... available to any device on the system", as you said in your reply - what do you mean by "device" and "system"? Sorry, but this will help clear the picture for me.
    4) What will this Oracle Client exactly do? Is it going to build an SQL table on my CCX server that queries the information from the Oracle server?
    5) My customer has already implemented database integration with IPCC 3.5. How can I check the current configuration of the Oracle Client, and what else should I check before proceeding with the integration?
    The reason I want to understand this is that a lot of database integration is requested from us, and CCX scripts can be used to query data from SQL tables. I have read a lot about this - waiting for the implementation phase - and lucky me, the first integration I face is with Oracle.
    Anyway, who said learning is easy.
    Thanks Aaron for your time; awaiting your answers.
    AT

  • Simple database server

    For school I need to develop an application.
    Can anyone help me with a simple database server? I want to read data from my database.
    Database: a simple employers table.
    I want to request, for example, all employers' first names.
    How can I build the server?
    How can I make the connection with the database?
    Are there tutorials or examples?
    Please help.
    Thanks in advance.

    I'd say Hypersonic SQL (HSQLDB) is your simplest bet; easy to set up and no software installation required, because it is a pure Java database. All you need to know is on their website:
    http://hsqldb.org/
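    For example, a minimal end-to-end sketch with HSQLDB (assuming hsqldb.jar is on the classpath; the employers table follows the question's wording):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class EmployerDemo {
            public static void main(String[] args) throws Exception {
                // An in-memory HSQLDB database: no server process, no installation.
                try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:school", "SA", "");
                     Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE employers (id INT PRIMARY KEY, firstname VARCHAR(50))");
                    st.execute("INSERT INTO employers VALUES (1, 'Alice')");
                    st.execute("INSERT INTO employers VALUES (2, 'Bob')");
                    // "Request all employers' first names", as asked above.
                    try (ResultSet rs = st.executeQuery("SELECT firstname FROM employers")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("firstname"));
                        }
                    }
                }
            }
        }

    Switching the URL to jdbc:hsqldb:file:school would keep the data on disk between runs.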

  • Simple database app.

    I know there have been loads of questions about simple database apps, but I wanted to ask if anyone could recommend something specific:
    Every month I receive an Excel spreadsheet of data (a membership list) for my organisation. It is basically a list of members' contact details and a few other bits of data. Up until now I have been using FileMaker 8.5 but the problems with Leopard are the last straw; it seems a vastly overpriced solution for what I actually need.
    All I need is to be able to create reports of members, search the database and occasionally print some address labels for mailouts. Obviously FileMaker is capable of doing this, but it's too expensive and I use only the basic features. It seems crazy for a small office like mine to have to pay over the odds for this; I know FileMaker is capable of much more and hence the cost, but I was looking for an alternative.
    Is there anything that would be suited to my needs?
    Thanks in advance!

    Hi Harry!
    If your data is simply contacts and a few bits more, what about AddressBook.app? I know that it is not very comfortable for import/export, but for contacts it is good (and included).
    I don't know when the OpenOffice developers will make a bigger update for Leopard; perhaps the current version will work too. Try it: http://porting.openoffice.org/mac/download/index.html (135MB)
    Ciao
    Massimo

  • UCCX 9 Outbound IVR ANI Database Integration

    Hi,
    I am using UCCX 9 Premium with an outbound IVR license to achieve a database integration with TTS, to play a dynamically customized message to the customer.
    I realized that outbound dialer contact info such as the account number cannot be carried over to the IVR script, which I believe is a shameful flaw of the system. But it is indicated that the ANI can be carried over. I am wondering: what is the ANI? Is it the contact number of the customer, or just the CTI port number? If it is the contact number, I may still be able to query the SQL database using it.
    Highly appreciated.
    Quan

    Do you know if this is still a limitation in UCCX 10.5 ?

  • Help with Designing a Simple Database

    I am currently working on a designing problem I would appreciate if someone could review my solution.
    The Problem:
    I need to create a simple database that contains the following entries:
    First Name //mandatory
    Last Name //mandatory
    Date of Birth //mandatory
    Hobbies //there could be anywhere from zero to an unlimited number of hobbies
    Types of actions that I need to perform on the database:
    Add, delete, and modify an entry
    Below are two design solutions I came up with:
    For both solutions I am going to create two text files. One of the text files, called profiles.txt, will contain the following fields on each line:
    Id, First Name, Last Name, Date of Birth
    //the Id field in this text file will be the primary key, so you will not see the Id duplicated
    The other text file, called hobbies.txt, will contain the following fields on each line:
    Id, hobby
    //the Id field can be duplicated in this text file, so a person can be linked to zero or several hobbies
    Now what differs between my solutions is how I am going to read this data into my program:
    Solution 1) When you start the program it will read profiles.txt into a linked list. After that is finished, the program will load the hobbies into several linked lists that the profiles linked list will point to. So basically each person will have a linked list of hobbies associated with him or her.
    The problem I see with this solution is that if there were 200 million people contained in profiles.txt, would my program crash since the computer would not have enough memory to load all of those names?
    Solution 2) Instead of loading the data at the start of the program, the data will stay in the text files. So when someone does a search, it will open the text file and search for the entry.
    The problem with this solution is that it would be hard to delete and modify names (would I have to rewrite the text file every time I make a change?). Would a good fix for this problem be creating a separate text file to keep track of any changes or deletions I make, and once in a while doing database maintenance?
    So a review of my questions is:
    1) Would my program crash if I had 200 million entries with solution 1?
    2) Is my solution 2 possible without being incredibly slow or complicated?
    3) Is there another way of doing this that I have not thought of?

    I think having one option will do. Now the problem with this text file approach is that we'll have to read all the information into memory if we are running a test driver for the program, and then work on the information in memory.
    After the program closes, whatever changes we made to this data in memory should be written to file, so we need to find a way of writing the data from memory to overwrite the file. I hope you get what I'm talking about.
    The database will consist of information like this:
    String firstName
    String lastName
    String DOB
    ArrayList / Vector hobbies
    Now, we want to declare a class with all this information as data fields, so let's say:
    public class Try {
        String firstName;
        String lastName;
        String DOB;
        ArrayList hobbies; // or Vector
    }
    and then create an instance of this class in the driver, which will be an ArrayList of this class or something, so each index of this ArrayList will have its own data read in from the file. But again, this is working in memory.
    After doing all we have to do, we want to write back to file all the changes we made to the data in memory. That's where we are stuck right now.
    A member of the group was suggesting we call functions to work on the txt file directly, which will mean we'll have to rewrite the file each time we call a function that operates on it. This is a slow process.
    I will be glad if anybody out there has a better way to implement this. Thanks a lot.
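    A minimal sketch of the load-modify-overwrite cycle being described (class and file names follow the thread; splitting on commas assumes the fields themselves contain none):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;

        public class ProfileStore {
            static class Profile {
                String id, firstName, lastName, dob;
            }

            public static void main(String[] args) throws IOException {
                List<Profile> profiles = new ArrayList<>();
                // Load profiles.txt, one "id,firstName,lastName,dob" record per line.
                for (String line : Files.readAllLines(Paths.get("profiles.txt"))) {
                    String[] f = line.split(",");
                    Profile p = new Profile();
                    p.id = f[0]; p.firstName = f[1]; p.lastName = f[2]; p.dob = f[3];
                    profiles.add(p);
                }
                // ...add, delete, or modify entries in memory here...
                // On shutdown, overwrite the file with whatever is in memory.
                List<String> out = new ArrayList<>();
                for (Profile p : profiles) {
                    out.add(String.join(",", p.id, p.firstName, p.lastName, p.dob));
                }
                Files.write(Paths.get("profiles.txt"), out);
            }
        }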

  • Oracle database integration with SAP PI for high volume & complex structures

    Hi
    We have a requirement to integrate an Oracle database with SAP PI 7.0 for sending data which is eventually transferred to multiple receivers. The involved data structure is hugely complex (around 18 child tables) with a high-volume processing requirement (100K+ objects need to be processed in 6-7 hours). We need to implement logic for prioritizing the objects, i.e. high-priority objects must be processed first, and then objects with normal priority.
    We could think of implementing this kind of logic in database procedures (at least that provides flexibility for implementing the data selection logic, and the processed data can be marked as successful in the same SP), but since the PI sender adapter doesn't currently support calling Oracle stored procedures, this option is ruled out. We can try implementing complex data selection using an Oracle table function, but a table function doesn't allow any SQL query which changes data (UPDATE, INSERT, DELETE, etc.), so it is impossible to mark selected objects in the table function from the PI communication channel's "Update Query" option.
    Also, we need to make sure that we are not processing all the objects at once, as the message size for 20 objects can vary from 100 KB to 15 MB, which could really lead to serious performance issues for bigger messages.
    Please share any implementation experience for handling these issues:
    1 - Database integration involving Oracle at the sender side
    2 - Complex data structures
    3 - High-volume processing
    4 - Controlled data selection from the database to control the message size in PI
    Thanks,
    Panchdev

    Hi,
    We can call the stored procedure using a receiver adapter with ccBPM; we can follow different approaches for reading the data in this case.
    a) Here a ccBPM instance needs to be triggered by some dummy message. After receiving this message, the ccBPM can make a sync call to the Oracle stored procedure (this can be done using the specific receiver data type structure); on getting the response message, the ccBPM can then proceed with the further steps. The stored procedure needs to be optimized to improve performance, as the mapping complexity will largely be affected by the structure in which the stored procedure returns the message. Prioritization of the objects can be handled in the stored procedure.
    b) Here a ccBPM instance can first read data from the header-level table, then make subsequent sync calls to the Oracle tables to read data from the child tables. This approach is less suitable for this interface, as the number of child tables is large.
    Pravesh.
