Data Federator targetSchema.TableName hanging after deployment

Data Sources: DB2, AS400, Oracle 9i, Oracle 8i, Flat File
EIM Tools: Data Federator 3.0 SP1 FP1, Data Services 3.0
I am using Data Federator to connect to and integrate the data sources via one target table. The mapping rules and target table validations are completed, and the project is deployed to the Query Server using the sysadmin logon.
Using DF Designer, I am able to run the Query tool and have the rows returned in 20 minutes (acceptable).
However, when I go to objects, navigate to targetSchema and access the target table content, the query resulting from this action is visible in the "Running Query" tab of the administrator interface. This query stays in the "Analyzing" status and never spawns queries against the individual data sources.
Is there any way to find out what other activity is being logged?
I have tried logging all messages in the leselect.log file, but could not find any meaningful information.
Can you help troubleshoot my problem? I am at a dead end now.
Thanks & Regards
Tiji

I have tried the following to see if the behaviour would change, but with no success yet.
- Redeployed the project
- Restarted Data Federator (Designer, Query Server, Repository)
- Browsed the objects via the Query Server administrator; every other object returns a result on the "Contents" tab except the table under targetSchema.
- Used a JDBC query tool to run the same query seen in the "Running Query" tab; it just runs forever (a minimal sketch of that test follows the query details below).
This is the query seen on the "Running Query" tab:
Query ID //NHC0AP42:3055/1241037146223/2460 [cancel] 
Status  ANALYZING at 13:32:26 | Started at 13:32:26 | 0 rows read | Execution plan cache used: unknown
SQL  SELECT * FROM "/EC_Sales_By_Customer/targetSchema/Sales By Customer" 
User  Executed by user 'sysadmin' with no default catalog and no default schema 
Subqueries  On connector None 
(no execution)  SQL  None 
Status  UNKNOWN at N/A | Started at N/A | N/A rows read
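In case it helps anyone reproduce the JDBC test, it looked roughly like the sketch below. The driver class name and URL format are copied from my client tool's connection settings, so treat them and the password as placeholders rather than definitive Data Federator values; whether the driver honors query timeouts is also an assumption.

import java.sql.*;

public class DfQueryTest {
    public static void main(String[] args) throws Exception {
        // Placeholder driver class and URL -- copy the exact values from
        // your own Data Federator JDBC client configuration.
        Class.forName("LeSelect.thin.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:datafederator://NHC0AP42:3055", "sysadmin", "<password>");
        try {
            Statement st = con.createStatement();
            // Ask the driver to fail fast instead of waiting forever
            // (assuming it honors query timeouts).
            st.setQueryTimeout(60);
            ResultSet rs = st.executeQuery(
                    "SELECT * FROM \"/EC_Sales_By_Customer/targetSchema/Sales By Customer\"");
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            System.out.println("Rows read: " + rows);
        } finally {
            con.close();
        }
    }
}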
Any thoughts to help troubleshoot? Could it be a bug?
Thanks
Tiji
Edited by: Tiji Mathew on Apr 29, 2009 5:40 PM

Similar Messages

  • Keeping data in server compact database AFTER deployment

    Hey!
    I have a small problem regarding SQL Server Compact and Visual Studio 2010: my application uses a database whose purpose is to manage article stocks.
    My problem is that I have an existing Excel sheet with articles which I want to put into the database. As of now I am changing the code for the specific Excel worksheets and the respective row indexes manually. To exemplify: I have workSheet[1] and need the content of rows 20 to 178. In my code, I change the worksheet index and adapt my for-loop to the right row indexes, then I start debugging and repeat the process for another set of rows.
    I could of course create input textboxes to enter row and worksheet indexes in my project, so that after deploying it I can enter my desired values.
    However, out of curiosity I would like to know if there is a way to keep data I read at debugging time in my database, to use it after deployment.
    Regards
    pat3d3r

    Use a full path to your database file in your connection string, and data will persist. In addition, if you have the database file as a project item, set it to Copy = Never.
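    For example, a full-path connection string might look like this (the path and file name are hypothetical; use whatever location your app actually writes to):

    Data Source=C:\ProgramData\MyApp\Articles.sdf

    With a relative path and the project item set to copy on build, each debug run can overwrite the .sdf in the output folder with a fresh copy, which is why data entered at debugging time seems to disappear.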
    You also need to think about where you want to place the file after deployment, I have some ideas here: 
    http://erikej.blogspot.dk/2013/10/sql-server-compact-4-desktop-app-with.html
    Please mark as answer, if this was it. Visit my SQL Server Compact blog http://erikej.blogspot.com

  • Data Integrator complex job hangs after workflow completion

    Post Author: Iomega
    CA Forum: Data Integration
    I have a complex Data Integrator job, and if it fails with an error and I try to rerun it from the Designer, it hangs after workflow completion. So I have to replicate the job and manually run it, removing each workflow as it completes. Our other DI jobs don't do this; they just flow to completion. The data warehouse is an Oracle database. Does anyone know how to correct this?

  • Problem reading data from serial port continuously - application hangs after some time

    I need to read data from two COM ports, and the order in which data appears from the COM ports is not fixed.
    I have used a small timeout and am reading data in a while loop continuously. After my application has run steadily for some time, it hangs and afterwards doesn't receive any data again.
    Then I need to restart my application to make it work.
    I am attaching the VI. Let me know if you see any issue.
    Kudos are always welcome if you got solution to some extent.
    I need my difficulties because they are necessary to enjoy my success.
    --Ranjeet
    Attachments:
    Scanning.vi ‏39 KB

    billko wrote:
    Ranjeet_Singh wrote:
    I need to read data from two COM ports, and the order in which data appears from the COM ports is not fixed.
    I have used a small timeout and am reading data in a while loop continuously. After my application has run steadily for some time, it hangs and afterwards doesn't receive any data again.
    Then I need to restart my application to make it work.
    I am attaching the VI. Let me know if you see any issue.
    What do you mean, "not fixed"? If there is no termination character, no start/stop character(s), or even a consistent data length, then how can you really be sure when the data starts and stops?
    I probably misunderstood you though. Assuming the last case is not true - there is a certain length to the data - then you should use the bytes at port, like in the otherwise disastrous serial port read example. In this case, it's NOT disastrous. You have to make sure that you read all the data that came through. Right now you have no idea how much data you just read. Also, if this is streaming data, you might want to break it out into a producer/consumer design pattern.
    Not fixed means the order is not fixed; data from either COM port can come at any time. The length is fixed: one COM port has 14 bytes and the other 8 bytes.
    Reading data is not an issue for me, as it works fine, but my question is why my application hangs after some time and stops reading data from the COM port.
    Kudos are always welcome if you got solution to some extent.
    I need my difficulties because they are necessary to enjoy my success.
    --Ranjeet

  • My iPod classic 80 GB hangs; the press-and-hold method did not help - please suggest a solution

    My iPod classic 80 GB hangs. After trying the press-and-hold (reset) method it is still not working. Please tell me a solution.

    Thanks for your response and good luck wishes, I suspect I will need them!
    In principle, I agree re: the manufacturer's warranty. However, I am pretty upset that this is now my second iPod to develop a critical fault within weeks of the warranty expiring, and frankly, it is not unreasonable to expect a state-of-the-art $500 electronic device to last well beyond one year of life.
    I agree talking to Apple is not likely to do me any good (the clue is in how impossible they make it to talk to them in the first place) - but that is not necessarily OK. I expect I will have to pay money to get the battery replaced - again, not OK (full stop - but especially given the cost of the device and the money I have spent with Apple). Yes, the batteries have a limited lifespan, but it should last longer than this (and surely, I should notice a gradual decline in its functionality, not an instant stop).
    I will try Deggie's suggestion (see my reply post), but probably won't hold my breath (think I have already done this). I probably will have to get the new battery - and probably under my own steam. It is a principle at stake and I feel I should be able to let Apple know how I'm feeling - and am frustrated that they make this virtually impossible. It sends the very clear message that they are not interested in listening to their customers.

  • How to change the data sources after deploying the application?

    Hi All,
    I want to know how to change the data sources after deploying the application to the application server.
    I'm using Oracle Application Server 10g Release 3 (10.1.3.1.0)

    Can you access the Enterprise Manager website of the target Application Server from your location? If so, you can change the datasource there. If not, you can bundle the datasource definition in your archive and use that one instead of the one configured in the target OC4J container. Or this could simply be your customer's responsibility: whenever you send a new WAR file, they modify the datasource if needed and deploy the application.
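    To illustrate the bundling option, here is a minimal sketch of what an application-level data-source definition can look like in OC4J 10.1.3 (a data-sources.xml packaged in the archive and referenced from orion-application.xml; the pool name, data-source name, JNDI name, URL and credentials are all placeholders to adapt):

    <data-sources>
      <connection-pool name="examplePool">
        <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
                            user="scott" password="tiger"
                            url="jdbc:oracle:thin:@//dbhost:1521/ORCL"/>
      </connection-pool>
      <managed-data-source name="exampleDS"
                           connection-pool-name="examplePool"
                           jndi-name="jdbc/exampleDS"/>
    </data-sources>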

  • Issue with data source after deploying

    We are experiencing an issue with our data source after deployment of a cube. On the datasource properties in Visual Studio 2012, we have the max connections set to 0 before the deployment. Once the cube is deployed, I can navigate to the <name>.0.ds.xml
    file and open it and see that the <MaxActiveConnections>0</MaxActiveConnections> is indeed set to 0. At some point over the next couple days, a process of the cube or some other action causes that value to get updated to some number too large to
    be converted to an int, and makes the datasource invalid. At that point we cannot view the datasource properties in SSMS, we cannot open the cube project from Visual Studio, and we’ve even had failures when trying to process the cube.  Is there a config
    somewhere that would cause this value to get overwritten, or some other behind the scenes process that we can look at?
    Our server information is:
    Microsoft SQL Server 2012 (SP1) - 11.0.3153.0 (X64)
                    Jul 22 2014 15:26:36
                    Copyright (c) Microsoft Corporation
                    Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    Chad Dotzenrod SWC | TECHNOLOGY PARTNERS 1420 Kensington Road, Suite 110 Oak Brook, Illinois 60523-2144 http://www.swc.com

    Typically you would import the metadata from the source location and either use that location as the data source (and so not need to redeploy), or deploy it to a separate target location.
    The replace action is destructive as you've found, and effectively performs a drop table followed by create table. Hence any data in the table is lost.
    If you just want the Control Center Manager to correctly display that the table is deployed, try setting the action to "Upgrade". This will try to upgrade the deployed object to match the definition in OWB, but as the two are identical this will result in no changes. However, it will update the deployment records to indicate that the object is deployed.
    Nigel.

  • Data Federator - Building Web Services Data Source - Request Guidance

    Hello -
    I am trying to build a web services data source in Data Federator.
    I have:
    1. In ECC, I have created a web service, proxy, port, endpoint, etc., and am testing it with WS Navigator.
    2. I have created a project in Data Federator of the web service type.
        a. Here I have assigned the WSDL URL (generated in ECC) as the URL.
        b. I have set up web service authentication using the same userid/pwd as in the web service definition (note: I have set the web service authentication to 'NONE' to avoid authorization issues).
        c. I am now trying to "Generate Operations".
    I keep getting the error:
    The File Access Parameters are not valid (directory path = <HERE IS THE PATH TO THE WSDL WITH THE HOST/PORT stripped OUT>; file pattern = document)
    I believe that it is finding the WSDL, because when I change the WSDL binding from 'Document' to 'RPC', the error changes to file pattern = rpc.
    What needs to be done with the file access parameters?
    PLEASE provide guidance. Thanks.
    - abhi

    Ananya,
    We are using XI to load data to BI from the source system. XI basically uses the same concept as web services - real-time data via RFC embedded in proxies.
    I was using web services to test the data load - eventually we will use XI.
    KJ's blog is pretty good. I was able to get data loaded - after several iterations and several OSS notes. It is still not perfect - there are several manual steps. I am going to pick KJ's brain on that.
    We are on SP10 and it looks like there are several bugs in RSRDA. Some of these are addressed in SP11 and SP12.
    Some notes to consider are
    0001003963, 0001008276, 0001009260 and 001003265.
    Let me know if you have any questions.

  • Data federator Universe refresh issue

    Hi all,
    We are facing an issue refreshing a universe which is based on Data Federator.
    Whenever changes are made in the BW cube, i.e. adding a dimension or a key figure, the new dimension object does not appear in the universe. Even when we delete a dimension or a key figure from BW, refresh the dependent tables in DF and deploy the project again, the catalog is updated, but the universe does not refresh properly.
    So Data Federator and BW are in sync, but the universe is not.
    we are using BO XI SP3  and BW 7.01 SP7
    I would really appreciate your valuable inputs regarding this issue
    Regards
    Suzane paul

    Hi Amit,
    Thanks for your reply.
    Ok. So, a universe on top of Data Federator has limited functionality.
    And the other option you mentioned is at report level. I am creating an ad-hoc universe and I have a few objects which calculate the days between two dates coming from two different tables.
    But how can I achieve this at Data Federator level? I have no function there to find the days between two dates; I see a lot of time and date functions, but not the one I require. Also, I added a column to the target table and tried to apply the formula in the default mapping area, but I see only the selected target table there. I need another date column from another table, which is not displayed in the default mapping area.
    How can I achieve this?
    Regards,
    -Peter

  • Data Federator on Unix - Need to connect to Informix

    Hi,
    We are planning to deploy Data Federator in a Linux SuSE 64-bit environment. We also need to connect to Informix and Teradata databases.
    According to the supported platforms document, only ODBC drivers are available to connect to Informix and Teradata databases.
    Is there a driver bridge available for these ODBC-only databases?
    Update: I didn't notice there were Unix ODBC drivers available. I think we should be fine.
    Will it be supported if we use the Informix Type 4 JDBC driver (http://www-01.ibm.com/software/data/informix/tools/jdbc/) as a generic JDBC driver? Is there any performance impact?
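    For what it's worth, if you try the generic JDBC route, the connection settings would look roughly like this (host, port, database and server names are placeholders; the driver class and URL scheme are the standard ones shipped with the Informix JDBC driver):

    Driver class: com.informix.jdbc.IfxDriver
    URL: jdbc:informix-sqli://dbhost:1526/mydb:INFORMIXSERVER=myifxserver

    Whether SAP supports this combination with Data Federator is a separate question from whether it works technically.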
    Appreciate the assistance.
    Thanks,
    Thiag.
    Edited by: Thiag Loganathan on Jul 21, 2010 5:43 PM
    Edited by: Thiag Loganathan on Jul 21, 2010 8:26 PM

    How will you access your third-party module on an NT box from UNIX? If it will be over TCP/IP, you may use the UTL_TCP package.

  • Data federator 32/64 bit with SAP?

    We have a problem in Data Federator: after the installation, under "data source" there should be an SAP value, but it isn't there (this should be active after the SP).
    We have installed the multi-platform edition in both the 32-bit and 64-bit versions, and also SP2 IA32 and 64.
    I've tried both versions, 32 and 64.
    Does anyone know how to solve this?
    regards Jacob

    To be more exact about what my colleague is describing above, the data source type that is missing is SAP NW BW, not SAP. Also, we have SAP NW BI 7.01 level 7, DF XI 3.0 SP2 and BOE 3.1 installed.
    Thank you.
    Best regards,
    Zabrina

  • Data Federator - Changing Web Service Datasource URL

    Hi,
    I am using Data Federator XI 3.0.
    I have a few web service datasources with their WSDL URL set to the location of the web service on my development machine. If I deploy the web service on the production server, is there a way to simply update WSDL URL of the datasources in Data Federator without recreating them (copy to draft, update operations, reselect operations, etc.) ?
    I tried putting the URL in a deployment context parameter, but it did not work. The datasources still reference the old URL even when I changed the URL in the parameter.
    Thanks,
    Chih Hui

    Hi,
    If you want to configure connections post-deployment, your application must be configured with a writable MDS repository. As I mentioned in the blog, for this you need an entry in adf-config.xml, and your server should have a registered MDS store. Unless you do this, the connection endpoint changes that you make will not be saved.
    To give an example, at the application end, in adf-config.xml, you need an entry like the following:
    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config xmlns="http://xmlns.oracle.com/mds/config" version="11.1.1.000">
        <persistence-config>
          <metadata-store-usages>
            <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>
    When you deploy this app to a server with a registered MDS repository, the deployment will bring up a dialog where you need to set (select or create) a partition for this app in the MDS repository. Once your deployment is done, if you change the endpoint using EM, it will save your changes.
    -Vishal

  • Data federator in  business objects

    Hi,
    What is the use of data federator in business objects.
    Regards,
    G

    Hi,
    The only option in this case is to 1) use BusinessObjects' Data Federator to bring XYZ data and BW (new) data together, 2) create a relational universe, and 3) create Crystal / WebI reports using the relational universe; see Picture 1.
    >> You can also combine the data by using a MultiProvider in BW which accesses the data in BW and the data in the legacy system; an additional alternative is to move all the data into BW.
    After 20 years go by, and we no longer need to report on data housed in XYZ, the environment will be turned off.
    1) When this happens, wouldn't we want to remove Data Federator and the relational universe from the scenario?
    >> That depends on the approach you took
    Ingo

  • Iphoto hangs after upgrading to Yosemite and latest iPhoto version

    It tells me the library needs upgrading and starts to search the library. The progress bar suggests something is happening, then it hangs at about 99% completion. I have had to force-quit twice after several hours stuck at that point, and then start the whole process from scratch when I restart.

    Have you followed these instructions - http://www.fatcatsoftware.com/iplm/Help/rebuilding%20a%20corrupted%20iphoto%20library.html
    iPhoto Library Manager > Help > Rebuilding a corrupted iPhoto library
    Printable help
    Rebuilding a corrupted iPhoto library
    If you have an iPhoto library that is corrupt and causing iPhoto to crash or otherwise be unusable, iPhoto Library Manager provides the ability to rebuild your library based on the information found in its library data files. Note that iPhoto also has a built-in rebuild function that can sometimes be used to repair a corrupted library database. You can find instructions on how to use that on Apple's website at http://support.apple.com/kb/HT2638 (iPhoto 6 or later) or http://support.apple.com/kb/HT2042 (iPhoto 5 or earlier).
    iPhoto Library Manager's rebuild works differently, in that instead of trying to repair the library in place, it creates a brand new library and tries to reimport the entire contents of the original library into the new one, including reconstructing albums, photo metadata, etc. Note that rebuilding a library has all the same limitations as other photo transfer operations as far as what can and can't be copied between libraries. Also, depending on how badly damaged the library is, iPhoto Library Manager may or may not be able to piece together some or all of the library metadata.
    To start a rebuild, select the library you would like to rebuild, then choose the "Rebuild Library" item from the "Library" menu. You will be prompted to choose a location for the rebuilt library, and whether or not you want iPLM to scavenge orphaned photos it finds in the library package. Once you've made your choices, iPLM will examine the library and rebuild the library structure and photos as best it can, then show you a preview of what it was able to find. If your library is badly damaged and the preview is missing a lot of content from the original library, this can save you from going through with a rebuild that won't end up being of much help.
    Once you've had a chance to examine the preview, if you want to go forward and create the rebuilt library, click the "Rebuild" button in the upper right. iPhoto Library Manager will create a new library and start importing the contents of the original library into the new one.
    Scavenging photos
    In some cases, either the iPhoto library database is too damaged for iPhoto Library Manager to be able to salvage any information from it, or the library data is incomplete and there are photos that still exist inside the library package, but iPhoto has lost track of them. In these cases, you may want to check the "Scavenge orphaned photos" checkbox when choosing a location for the rebuilt library. After iPhoto Library Manager has read the library data as best it can, it will perform an additional pass through the package and locate any photos that are no longer referenced in the library database. Any additional photos that are found will be included in the rebuild, and a new "Scavenged Photos" album will be created in the rebuilt library containing any scavenged photos.
    LN

  • Data Federator - Read timed out

    Hi,
    I'm using Data Federator XI 3.0 SP2. I have a datasource connecting to a web service. The web service may take a long time to do heavy processing, etc., before returning any data. If I call the web service via Data Federator, I get the following error:
    Exception was thrown while executing a query on Data Federator Query Server.
      An exception occurred when querying Data Federator Query Server.
        [Data Federator Driver][Server]
    [Wrapper /TEST/sysadmin/sources/draft/ReturnInputAsOutput]HTTP input/output exception: Read timed out
          [Wrapper /TEST/sysadmin/sources/draft/ReturnInputAsOutput]HTTP input/output exception: Read timed out
            [Wrapper /TEST/sysadmin/sources/draft/ReturnInputAsOutput]HTTP input/output exception: Read timed out
              HTTP input/output exception: Read timed out
    I'm guessing that the Data Federator query timed out because the web service takes a long time to respond, is that right? If so, how do I increase the timeout value? If not, what does the error mean and how do I resolve it?
    Thanks.
    Edited by: Chih Hui Wan on Nov 19, 2009 9:15 AM

    Hi Dayanand,
    For testing purposes, I created a small, simple web service with only one operation that sleeps for more than two minutes before returning a string. I called this web service operation from Data Federator. After waiting for a while, I got the "Read timed out" error.
    Regards,
    Chih Hui
