Any APIs to write metadata to the database?

I have a situation where I need to extract the DDL objects (procedures, packages, functions) from the database. I was able to solve this by using the DBMS_METADATA.GetDDL() API.
Now I have another situation where I need to write changes to those DDL objects back to the database. I don't want to use TOAD to compile them. The objective is to use an API to write any changes to DDL objects.
I could not find any API that does the reverse of GetDDL, i.e. writes the DDL back.
Any help is appreciated.
Thanks
Ash

Let me explain the situation. In our current setup we use TOAD to create/change our custom objects (packages, procedures and functions). When a change is ready to be migrated to production, we copy the source code from development, paste it into the production TOAD editor and execute it to save the object in the database.
We want to get away from that and deploy the changes from the command line (the idea is to use Subversion to capture changes and deploy through a script).
To implement this solution, I broke it down into two components:
1) Extract all custom objects' metadata and store it in folders. For this I used the DBMS_METADATA.getDDL() API, passing parameters like 'PACKAGE', 'PROCEDURE', etc. I wrote a Java program, ran it from the Windows command line, and all the objects were retrieved and saved under their corresponding file names in their respective folders.
2) Deploy. As a developer, I would get the source code for an object from one of the folders in my Windows environment and make the necessary modifications. Say, for example, test1.prc is a procedure that I extracted from the database and whose code I modified. Now I'm ready to deploy this change to production.
There are two ways of deploying to production: 1) use TOAD to execute the changes, or 2) use SQL*Plus to compile the changes.
I'm looking for a third alternative: if there is any API similar to GetDDL() (I wish they had SetDDL), I would write a Java wrapper to send the changes back to the database.
If there isn't one, the only way I can think of is to invoke SQL*Plus from the program and compile the object that way. (This would be my fallback solution.)
Hope I explained it clearly.
Thanks
Ash
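There is no SetDDL, but the text that GetDDL returns is itself executable SQL, so a plain JDBC Statement.execute() can play that role. A minimal sketch of the idea (the connection URL, the credentials, and the prepareDdlForJdbc helper are mine, for illustration only): read the extracted file, strip the SQL*Plus "/" terminator that JDBC will not accept, and execute the source.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeployDdl {

    // Extracted .prc/.pkb files usually end with a SQL*Plus "/" terminator;
    // JDBC rejects it, so strip a trailing slash before executing.
    static String prepareDdlForJdbc(String script) {
        String s = script.trim();
        if (s.endsWith("/")) {
            s = s.substring(0, s.length() - 1).trim();
        }
        return s;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- replace with your own.
        String url = "jdbc:oracle:thin:@//dbhost:1521/PROD";
        String ddl = prepareDdlForJdbc(
                new String(Files.readAllBytes(Paths.get(args[0]))));
        try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
             Statement st = con.createStatement()) {
            st.execute(ddl); // e.g. CREATE OR REPLACE PROCEDURE test1 ...
        }
    }
}
```

Note that execute() will not raise an error for a procedure that compiles with errors; querying USER_ERRORS afterwards tells you whether the deployment actually succeeded.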

Similar Messages

  • Any java API to get the metadata for a deployed bpel process in soa/bpm11g?

    Hi,
    Just wondering if this is possible: is there some existing Java API to retrieve the metadata (containing activities, isSynchronous, version information, etc.) for a deployed BPEL process? If not, is there any other way to achieve this goal (for example, querying a database table directly)?
    I can only find the link below for the SOA 11g Java API. I am able to invoke the BPEL process using this API (direct binding) from a Java client.
    http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e10659/index.html?overview-tree.html
    Any help will be greatly appreciated.
    Thanks,
    Bin

    What I have found so far:
    1. You can get some process property values from the ComponentInstance class; see its API:
    http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e10659/oracle/soa/management/facade/ComponentInstance.html
    Please refer to http://blogs.oracle.com/soabpm/2009/07/soa_suite_11g_api_tricks_part.html for how to get the component instance (containing the BPEL process) from a composite object.
    2. In the dev_soainfra database schema, the CUBE_INSTANCE table contains most of the SOA BPEL component information. The BPM_CUBE_PROCESS table seems to contain only the processes defined in a BPM application. This is a little confusing: if we want to develop a BPEL application, should it be put in a SOA or a BPM application, and what is the difference?
    Please feel free to correct any mistakes here.

  • Is there any API for pushing update patch related data from qualysguard in SCCM database?

    The problem is that my company wants to integrate SCCM with QualysGuard. Qualys will scan for missing patches and generate a patch report. The required data will be extracted from this report and pushed into the SCCM database. I have sorted out extracting the data from Qualys, but I am stuck at pushing the patch data into the update repository. I searched for an API that could push data into the SCCM database, but was unable to find one. I thought of writing my own script to run the SQL queries, but that would ultimately corrupt the SCCM database, since there may be 60-70 table dependencies.
    Please suggest any SCCM API which can help me push data into its database (particularly into the tables interacting with the update repository).

    We had looked into doing something similar, and this post is the closest we found:
    https://community.qualys.com/thread/11816
    Basically you will need a middle-man between Qualys and ConfigMgr to house the data. This may be a new database, or a whole separate platform. I expect this could easily be done with SQL and SSRS.
    Also note that database edits to the ConfigMgr database are not supported by Microsoft; I would recommend using a central system to pull data from Qualys and ConfigMgr without modifying either.
    Daniel Ratliff | http://www.PotentEngineer.com

  • Can no longer write metadata to Raw files

    My team and I are working in Photoshop CS5. We write metadata to JPEGs and raw files, saving to the raw database rather than as sidecar .xmp files. We are no longer able to save the metadata to raw files. There is no error message: the data writes like normal and the user who writes the metadata can see it, but when you open the file on another computer the metadata fields are empty; the JPEGs, however, do have the new metadata. Was there a setting change at some point? This has been going on for a couple of months and I cannot find an answer anywhere. Again, we batch-write to RAW+JPEG; the JPEG files save the metadata, but the raw files look like the data saved, yet it is not visible on any computer other than the one that added it. This is occurring on multiple computers, both Mac and PC.

    TimVasilovic wrote:
    I understand the process you are describing. In the past I have been able to embed metadata in a raw file, move it to a server, pick that file up on another computer and see the metadata without needing the .xmp sidecar. Is the ability to embed no longer supported by Photoshop? Since this issue began we have taken to doing all our metadata editing in Photo Mechanic, which embeds without creating a sidecar. If Photoshop is pushing people to create sidecar .xmp files only for writing metadata to raw files, I will probably move fully to Photo Mechanic, because using sidecars has proven tricky in the past with how our files get moved around.
    If you are using Adobe Camera Raw on a camera raw file, either an XMP sidecar file is created or the data is stored in a database; which happens is your option in the Camera Raw preferences. If you use ACR to edit a JPEG, it does not create a distinct XMP file, but the data is not written directly to the image either; one can still delete the edits in Bridge via Edit > Develop Settings.
    If you use a DNG, the metadata is written to the image. Not sure what process Photo Mechanic uses.
    It appears to be a permission problem: other than CS6 Bridge now being 64-bit and having a new cache method, there are no changes in how it handles metadata.

  • Can multiple threads write to the database?

    I am a little confused from the statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <string.h>
    #include <windows.h>
    #include <db.h>

    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;

    DWORD WINAPI th_write(LPVOID lpParam)
    {
        DBT key, data;
        char key_buff[32], data_buff[32];
        DWORD i;
        printf("thread(%s) - start\n", (char *)lpParam);
        for (i = 0; i < 200; ++i) {
            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            sprintf(key_buff, "K:%s", (char *)lpParam);
            sprintf(data_buff, "D:%s:%8d", (char *)lpParam, (int)i);
            key.data = key_buff;
            key.size = (u_int32_t)strlen(key_buff);
            data.data = data_buff;
            data.size = (u_int32_t)strlen(data_buff);
            db->put(db, NULL, &key, &data, 0);
            Sleep(5);
        }
        printf("thread(%s) - end\n", (char *)lpParam);
        return 0;
    }

    int main()
    {
        db_env_create(&dbEnv, 0);
        dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
        db_create(&db, dbEnv, 0);
        db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
        CreateThread(NULL, 0, th_write, "A", 0, 0);
        CreateThread(NULL, 0, th_write, "B", 0, 0);
        CreateThread(NULL, 0, th_write, "B", 0, 0);
        CreateThread(NULL, 0, th_write, "C", 0, 0);
        th_write("C");
        Sleep(2000);
        return 0;
    }

    Here is some clarification about BDB locking and multi-threaded behavior.
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
    Please Refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    A Data Store (DS) setup
    (that is, not using an environment, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN, DB_INIT_LOG environment-region flags specified,
    each corresponding to the appropriate subsystem: locking, transactions, logging)
    will not guard against data corruption due to accessing the same database page and overwriting the same records, corrupting the internal structure of the database, etc.
    (Note that for the Btree, Hash and Recno access methods we lock at the database page level; only for the Queue access method do we lock at the record level.)
    So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database (its internal structure).
    Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there will be no need for locking. But that is likely not the behavior you want.
    Hence, you need to use either a CDS (Concurrent Data Store) or TDS (Transactional Data Store) setup.
    See the table comparing the various set ups, here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
    Now, since the requirement is to have multiple writers (multi-threaded writes to the database),
    then TDS would be the way to go (CDS is useful only in single-writer scenarios, when there are no needs for recoverability).
    To summarize:
    The best way to understand which setup is needed is to answer the following questions:
    - What is the data access scenario? Is it multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
    Definitely yes, You can see data loss and/or data corruption.
    You can check the behavior of your test case in the following way:
    1. Run your test case.
    2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
    You will likely see db_verify complain, unless the thread scheduler on Windows happens to run each thread one after the other; in other words, unless no two or more threads ever write to the database at the same time, effectively serializing the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
    In Your case the TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    With this you have proper deadlock handling and proper transaction usage in place, so you are protected against potential data corruption/data loss.
    see http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage
    means you must pay careful attention to issues of concurrency. Transactions help your
    application's concurrency by providing various levels of isolation for your threads of control. In
    addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be
    seen by readers from another transaction until the first commits its changes. Different threads
    use different transaction handles, so this mechanism is normally used to provide isolation
    between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your
    application to see uncommitted reads, which means that one transaction can see data that
    has been modified but not yet committed by another transaction. Doing this might mean
    your transaction reads data "dirtied" by another transaction, but which subsequently might
    change before that other transaction commits its changes. On the other hand, lowering your
    isolation requirements means that your application can experience improved throughput due
    to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock
    detection, see Concurrency (page 32).

  • Write Back to Database

    Hi,
    Is there any component available on the market which I can use to write back to the database? I looked into the web service option, but there are no clear (step-by-step) instructions on how to use it. If somebody has used any workaround or component, please let me know.
    Thanks In Advance
    Kumar

    For the web services, it is good to create three methods:
    deleteComment
    getComment
    insertComment
    Example:
    public void handleComment(String Dashboard_Cd, String Metric_Cd,
            Integer Comment_Id, String Network_Id, String comment, String User_id) {
        SimpleDateFormat sf = new SimpleDateFormat("MMyyyy");
        String fiscalPeriod = sf.format(new java.util.Date());
        DatabaseManager dbm = new DatabaseManager();
        if (Comment_Id.equals(0)) {
            try {
                if (dbm.init(DBConfig.getInstance().getConfig(),
                        DatabaseManager.SECURITY_LEVEL.LEVEL_0)) {
                    StringBuffer sql = new StringBuffer();
                    sql.append(" INSERT INTO ")
                       .append(Environment.getInstance().getDbPrefix())
                       .append("_UTIL.TUM_DBD_COMMENTARY ")
                       .append("SELECT '").append(Dashboard_Cd).append("', '").append(Metric_Cd).append("', ")
                       .append("TMP.MAXID+SUM(1) OVER (ROWS UNBOUNDED PRECEDING) ")
                       .append(", '").append(User_id).append("' ")
                       .append(", ").append(fiscalPeriod)
                       .append(", Current_Date ")
                       .append(", '").append(comment).append("', USER, CAST(CURRENT_TIMESTAMP AS TIMESTAMP(6)) ")
                       .append(", 'Y' ")
                       .append("FROM (SELECT COALESCE(MAX(Comment_Id),0) AS MAXID FROM ")
                       .append(Environment.getInstance().getDbPrefix())
                       .append("_UTIL.TUM_DBD_COMMENTARY) TMP");
                    log.info(sql.toString());
                    JPreparedStatement jp = new JPreparedStatement(sql.toString());
                    jp.setBeanClass(Commentary.class);
                    try {
                        DBResult rs = dbm.doQuery(jp,
                                DBConfig.getInstance().getContext(Network_Id));
                        if (!rs.getStatus()) {
                            for (SQLException e : rs.exceptionInfo()) {
                                log.error("", e);
                            }
                        }
                    } catch (Exception e) {
                        log.error("", e);
                    }
                }
            } finally {
                dbm.cleanUp();
            }
        }
    }
    Web service connection:
    From Manage Connections, click Add and select Web Service Connection.
    You will want to import the URL for your EAR file, where you can select the method you created.
    I do not have a blog or forum at this time, though I will see if I can create a walkthrough video within the week.

  • Is there any API for performing Assembly Completion with LPN

    Hi,
    Is there any API available for performing Assembly Completion with an LPN?
    Thanks and Regards,
    Ramnish.


  • Is there any API available to read drm files in ios devices?

    Hi,
    I am an iOS developer trying to read DRM files from my app.
    I want to know whether there is any API available for this.
    Thanks in Advance.

    Hi Prathap,
    No, there is no Java API specific to RDF functionality available. However, Java client applications can use JDBC to access the RDF store. A partially relevant post is "How do you query Oracle RDF database using Java program?". The JDBC documentation has detailed coverage of using JDBC.
    Code snippets for one way of accessing SDO_RDF_MATCH through Java is below:
    <..........>
    sbStmt.append("select * from TABLE( ")
          .append(" SDO_RDF_MATCH('(?S ?P ?O)',")
          .append(" SDO_RDF_Models('")
          .append( <model_name> )
          .append("'),")
          .append("null,null,null))")
          .append(" where rownum <= ")
          .append(iNumRows);
    ResultSet rs = stmt.executeQuery(sbStmt.toString());
    while (rs.next()) {
        System.out.print("\n");
        System.out.print(rs.getString("S"));
        System.out.print(" (");
        System.out.print(rs.getString("S$RDFVTYP"));
        System.out.print(") ");
        System.out.print(", ");
        System.out.print(rs.getString("P"));
        System.out.print(" (");
        System.out.print(rs.getString("P$RDFVTYP"));
        System.out.print(") ");
        System.out.print(", ");
        System.out.print(rs.getString("O"));
        System.out.print(" (");
        System.out.print(rs.getString("O$RDFVTYP"));
        System.out.print(") ");
    }
    <............>
    <...... handling CLOB values that are returned ....>
    // read CLOB if applicable
    Reader reader = null;
    try {
    CLOB clob = ((OracleResultSet) rs).getCLOB("O$RDFCLOB");
    if (clob == null) {
    System.out.print("not a long literal ");
    else {
    reader = clob.getCharacterStream();
    char[] buffer = new char[1024];
    // reading 1K at a time (just a demo)
    int iLength = reader.read(buffer);
    for (int i = 0; i < iLength; i++) {
    System.out.print(buffer);
    System.out.print(" ...");
    finally {
    if (reader != null) reader.close();
    <..........>

  • Bridge won't write metadata; Photoshop will

    I have some TIFFs created from scans. Here's the problem I'm seeing.
    When opening the image in PS CS5, I often get a message saying "This file contains file info data which cannot be read and which has been ignored."
    First, I try to change some metadata in Bridge and save the image. Sometimes this solves the problem. Often, though, I get a message saying "There was an error writing metadata to 'filename'."
    Next, I open the image in PS, simply clicking OK on the error message above. I open File Info, change some metadata and save the image. So far this has always solved the problem.
    The source of the problem will likely not be easy to discover, since the error message provides no useful information and since it happens frequently but not universally. At first I thought it was related to specific fields, but further testing seems to indicate it is not. Then, since another running application gave me a low-storage message, I thought it might be related to that. That does not seem to be the case either.
    However, even if I don't find the root cause, it would be helpful if Bridge would simply rewrite the metadata to fix the problem, rather than my having to go into PS.
    Any thoughts?
    Dale

    From what you are saying, the TIFFs are corrupt somehow. Open them in PS and then save them with another name, and see if you can then write metadata. For whatever reason, Bridge may be more sensitive to the error than PS.
    Can you change the scan to another format to eliminate the problem?
    Granted, the metadata is somehow corrupt. However, opening the files in PS and then saving them with the same name solves the problem, so it must rebuild the corrupt metadata fields.
    I've scanned several hundred TIFFs and only had the problem on a few files. The only archival formats Nikon Scan 4 gives me are TIFF and NEF, and if I use NEF, ACR won't recognize the colorspace.
    Dale

  • Dirty Writes to the database

    I have a value object which was originally populated with information from database. I have a requirement to update ONLY the data that was changed in the value object. I could come up with something clever to do it myself, or I could use an existing pattern and Java API to accomplish this task. I prefer using an existing API.
    If there are any APIs out there that solve this problem, please let me know. Thanks in advance!

    I don't think that's the question here. I think the question is: let's say there are four fields, and one field gets updated. Only the one updated field gets written to the database, not a generic update of all four.
    Two situations:
    1. I have a section of the application that updates ONLY that field. If so, then I only update that field. There never can be a conflict, so there is no need to discuss update strategies.
    2. I have a section of the application that updates that field and one other, and some other part of the application can update that other field. Given nothing more than that the two fields can be updated, there is no way to determine which one is correct. So you either find another rule or just use "last one wins" (which means discussing update strategies is meaningless).
    I don't think we're talking about conflict-resolution strategies. Well, you are, but I don't think the OP was. I think the OP is just talking about speeding up the serialization of an object's state to a database by not updating fields that haven't been modified.
    I don't disagree with what you are saying per se, but I just don't see update conflict resolution listed as an issue by the OP.
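    A minimal sketch of that "write only the dirty fields" idea (the class and method names here are mine, not from any framework; in practice an ORM such as Hibernate with its dynamic-update setting does this for you): diff the current field values against a snapshot taken at load time, then build an UPDATE listing only the changed columns.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class DirtyUpdateBuilder {

    // Returns only the entries whose value differs from the original snapshot.
    static Map<String, Object> changedFields(Map<String, Object> original,
                                             Map<String, Object> current) {
        Map<String, Object> changed = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : current.entrySet()) {
            Object before = original.get(e.getKey());
            if (before == null ? e.getValue() != null : !before.equals(e.getValue())) {
                changed.put(e.getKey(), e.getValue());
            }
        }
        return changed;
    }

    // Builds a parameterized UPDATE touching only the changed columns.
    static String buildUpdate(String table, String keyColumn,
                              Map<String, Object> changed) {
        StringJoiner sets = new StringJoiner(", ");
        for (String col : changed.keySet()) {
            sets.add(col + " = ?");
        }
        return "UPDATE " + table + " SET " + sets + " WHERE " + keyColumn + " = ?";
    }
}
```

    Binding the changed values (plus the key) to the PreparedStatement in iteration order then completes the dirty write; this also sidesteps the conflict question above only in case 1, where no other code touches the same columns.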

  • Are there any APIs for FND attachments

    Hi,
    Is there any API for FND Attachments? I would like to load the concurrent request output as an attachment to the record.
    So I'm planning to write PL/SQL to read the file and load it into FND_LOBS. Later I would like this attachment to be available through the UI.
    Looking for Public or Private APIs to achieve above task.
    Thanks,
    Satya

    Dear,
    I found this very informative site:
    http://www.pvmehta.com/myscript.html
    I hope you will benefit from it in the future.
    Regards,
    Kamran J. Chaudhry.

  • Free report writer for Oracle Database other than Crystal Reports and DataVision

    Dear All,
    I am looking for a free report writer for Oracle Database, other than Crystal Reports and DataVision.
    We are working on both Linux and Windows platforms.
    Does anyone have a direct link or website address for one?
    Regards,
    Vipul Patel
    Ahmedabad

    Please check the following link -
    http://www.google.co.in/search?client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&channel=s&hl=en&q=open+source+report+writer+for+oracle&meta=&btnG=Google+Search
    Regards.
    Satyaki De.

  • Error deploying data tier application to SQL Azure at "Registering metadata for database" step ... should I care?

    I'd like to move an on-premises SQL Server database to SQL Azure. I've used SQL Management Studio's Extract Data-tier Application to save my db as a dacpac file. Now I'm connected to my Azure server and I've chosen Deploy Data-tier Application. I select my dacpac and the deploy starts, but then on the last step, "Registering metadata for database", it times out. I've tried it a couple of times, and each time the deployed database is there and appears to be fully populated, but I'm not sure if I can ignore that error and continue. What is supposed to happen in that step, and should I expect it to fail when deploying to SQL Azure?
    I'm following the steps here, http://msdn.microsoft.com/en-us/library/hh694043.aspx, in the Using Migration Tools > Data-tier Application DAC Package section, except that to deploy there's no SQL Mgmt Studio > Object Explorer [server] > Management > "Data Tier Applications" node, so I'm deploying by right-clicking on the server name and choosing "Deploy Data-tier Application".
    My (total) guess here is that it has deployed the database fine, and it's doing whatever magic happens when you register a data-tier application, except that it's not working for SQL Azure.
    I'm running against a server created for the new Azure service tiers, not against a Web/Business edition server.
    The full details of the error I get are below.
    The full details of error I get are below. 
    thanks, 
    Rory
    TITLE: Microsoft SQL Server Management Studio
    Could not deploy package.
    Warning SQL0: A project which specifies SQL Server 2008 as the target platform may experience compatibility issues with SQL Azure.
     (Microsoft.SqlServer.Dac)
    ADDITIONAL INFORMATION:
    Unable to register data-tier application: Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft.Data.Tools.Schema.Sql)
    Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft.Data.Tools.Schema.Sql)
    Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server, Error: -2)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=11.00.9213&EvtSrc=MSSQLServer&EvtID=-2&LinkId=20476
    The wait operation timed out
    BUTTONS:
    OK

    Hello,
    The registration process creates a DAC definition that describes the objects in the database, and registers it in the master system database in Windows Azure SQL Database.
    Based on the error message, there is a timeout error when connecting to SQL Database. Did you deploy a large database? When moving large data to Azure SQL Database, it is recommended to use SQL Server Integration Services (SSIS) and the BCP utility.
    Or you can try to create a client application with the Data-Tier Application Framework (DACFx) to import the database and handle connection loss by re-establishing the connection.
    Reference: http://sqldacexamples.codeplex.com/
    Regards,
    Fanny Liu
    TechNet Community Support

  • Need to write object to database but can't find useful examples

    Hi,
    Within one servlet I need to write an Attributes object to a database, to be retrieved sometime later by a second servlet. All the examples of serialization code that I can find are for writing to serial or file streams. What I need is to write the Attributes object to a MySQL database. How do I get it into a form that I can write to the database, and how do I later turn it back into an object? I've not been able to answer those questions from the examples I've found so far.
    Any help would be greatly appreciated.
    -- Rob

    There are at least two choices.
    1. Serialize to a ByteArrayOutputStream, base-64-encode that data, and save as a string in the database.
    2. Serialize directly to a database blob.
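    Choice 1 can be sketched in plain Java (a `HashMap` stands in for the poster's Attributes object, and the class name `SerializeDemo` is made up for illustration; the encoded string goes into a VARCHAR/TEXT column):

    ```java
    import java.io.*;
    import java.util.Base64;

    public class SerializeDemo {
        // Serialize any Serializable object and Base64-encode the bytes,
        // producing a string that is safe to store in a text column.
        static String toBase64(Serializable obj) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            return Base64.getEncoder().encodeToString(bos.toByteArray());
        }

        // Reverse: decode the Base64 string and deserialize the object.
        static Object fromBase64(String s) throws IOException, ClassNotFoundException {
            byte[] bytes = Base64.getDecoder().decode(s);
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            java.util.HashMap<String, String> attrs = new java.util.HashMap<>();
            attrs.put("user", "rob");
            String encoded = toBase64(attrs);      // INSERT this string into the table
            Object restored = fromBase64(encoded); // SELECT it back and rebuild the object
            System.out.println(restored);          // prints {user=rob}
        }
    }
    ```

    For choice 2, skip the Base64 step and pass `bos.toByteArray()` to `PreparedStatement.setBytes` on a BLOB column; either way, the stored class must implement `Serializable`.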

  • [iPhone] Is there any API like PostMessage in MS Windows? postNotificationName does not work

    I want to send a message or event without waiting for the thread to process it.
    I have tried NSDistributedNotificationCenter, NSNotificationQueue and NSNotificationCenter, but NSDistributedNotificationCenter and NSNotificationQueue don't work, while NSNotificationCenter blocks until the message has been processed.
    Could anyone tell me which API to use? Thanks in advance!
    The code for NSDistributedNotificationCenter is as follows:
        [[NSDistributedNotificationCenter defaultCenter] addObserver:self
                                                            selector:@selector(finishWorkHandler:)
                                                                name:@"BAIDU_GOOGE_BEST"
                                                              object:nil];
        printf("try to send message\n");
        NSMutableDictionary *info = [NSMutableDictionary dictionary];
        [info setObject:@"pig" forKey:@"Pig"];
        [[NSDistributedNotificationCenter defaultCenter] postNotificationName:@"BAIDU_GOOGE_BEST"
                                                                       object:nil
                                                                     userInfo:info
                                                           deliverImmediately:YES];

    - (void)finishWorkHandler:(NSNotification *)note
    {
        NSLog(@"NSNotificationCenter name: %@", [note name]);
        NSLog(@"NSNotificationCenter object: %@", [note object]);
        // unregister
        // [[NSNotificationCenter defaultCenter] removeObserver:self];
    }
    Message was edited by: coolhome

    Hi,
    I am making a stand-alone application that needs to connect to a database, create the necessary tables (if they are not already present) and then add, remove and update data in this database.

    Oracle DBAs will not appreciate your app attempting to create databases/entities.

    The problem is that our customers may be using many different DBMSs, such as Oracle, MySQL or MS SQL Server. I don't want to make them use any specific database, so the application should adapt to the DBMS installed in the customer's system.

    Then performance must not be your goal, at least in terms of data processing, because attempting to process data off box, versus on box within a database, will in the vast majority of cases be orders of magnitude slower.

    My question is: is there any API or framework or something that would make the process of using different JDBC drivers invisible, or easier for me to develop against? Any advice would be welcomed.

    Hibernate will probably make it easy, as long as you don't use any pass-through SQL (and there goes performance again). All of the databases you named have either free or very low-cost developer editions which you can use for testing. (Actually, I think they are all free.)

    I would also like to know if there is any way to find out whether a SQL statement will work properly in a set of different DBMSs.

    Not without testing.
    It is almost guaranteed that schema creation statements will not be portable (like creating a table).
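    The last point can be made concrete: DML is largely portable across the DBMSs named above, but DDL such as an auto-incrementing key is not, so one common workaround is to branch on the JDBC URL. A minimal sketch (the class name `SqlDialect` and the column definitions are illustrative; an ORM like Hibernate does this dialect selection for you):

    ```java
    public class SqlDialect {
        // Pick DBMS-specific DDL from the JDBC URL prefix, since schema
        // statements are the least portable part of SQL.
        static String createTableSql(String jdbcUrl) {
            String idCol;
            if (jdbcUrl.startsWith("jdbc:mysql:")) {
                idCol = "id INT AUTO_INCREMENT PRIMARY KEY";
            } else if (jdbcUrl.startsWith("jdbc:oracle:")) {
                idCol = "id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY";
            } else if (jdbcUrl.startsWith("jdbc:sqlserver:")) {
                idCol = "id INT IDENTITY(1,1) PRIMARY KEY";
            } else {
                idCol = "id INT PRIMARY KEY"; // lowest common denominator
            }
            return "CREATE TABLE customers (" + idCol + ", name VARCHAR(100))";
        }

        public static void main(String[] args) {
            System.out.println(createTableSql("jdbc:mysql://localhost/app"));
            System.out.println(createTableSql("jdbc:oracle:thin:@//db:1521/app"));
        }
    }
    ```

    At runtime, `DatabaseMetaData.getTables` can check whether the table already exists before issuing the dialect-specific DDL, which keeps the "create if not present" step portable as well.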
