Tunneling failed in Data Services 2.5.1

We created a set of applications which all use Data Services over
RTMP on port 2038. In services-config.xml we set up the
my-rtmp channel definition to use rtmpt as the protocol. All
applications are compiled against this services-config.xml.
For most users it is working fine; however, for some of our
corporate users behind restrictive firewalls, a connection to the
data services cannot be made. The application throws a generic
'send failed' message.
Since we are using the rtmpt protocol, what can we do to make
sure our corporate users are able to use our applications?
Thanks,
-Rogier

At first the problem seemed fixed. However, because we set rtmpt as
the only channel, our other users experienced a big drop-off
in performance due to the tunneling.
I read a very good blog entry:
http://www.infoaccelerator.net/blog/post.cfm/setting-up-rtmpt-failover-on-lcds
where Andrew talks about using rtmpt as a failover for those
users that cannot use rtmp.
We implemented the structure he described (a rough sketch follows below):
- channel definition for rtmp in services-config.xml
- channel definition for rtmpt in services-config.xml
- channel definition for polling-amf in services-config.xml
- define all three channels as default channels in
data-management-config.xml, in the order rtmp, rtmpt, polling-amf
Everybody was able to use the data services.
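For reference, the resulting channel layout looks roughly like the sketch below. Treat it as a sketch rather than a drop-in configuration: the channel ids and the rtmp port come from our setup, while the endpoint class names and the rtmpt/polling ports are assumptions, so check them against your own LCDS services-config.xml. The part that matters is the ordered default-channels list, which makes the client try rtmp first and fall back to rtmpt and finally to polling AMF.

    <!-- services-config.xml: one channel definition per transport -->
    <channels>
        <channel-definition id="my-rtmp" class="mx.messaging.channels.RTMPChannel">
            <endpoint uri="rtmp://{server.name}:2038" class="flex.messaging.endpoints.RTMPEndpoint"/>
        </channel-definition>
        <channel-definition id="my-rtmpt" class="mx.messaging.channels.RTMPChannel">
            <!-- tunneled variant; the port here is an assumption - use an HTTP port your firewalls allow -->
            <endpoint uri="rtmpt://{server.name}:80" class="flex.messaging.endpoints.RTMPEndpoint"/>
        </channel-definition>
        <channel-definition id="my-polling-amf" class="mx.messaging.channels.AMFChannel">
            <endpoint uri="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling"
                      class="flex.messaging.endpoints.AMFEndpoint"/>
            <properties>
                <polling-enabled>true</polling-enabled>
                <polling-interval-seconds>4</polling-interval-seconds>
            </properties>
        </channel-definition>
    </channels>

    <!-- data-management-config.xml, inside the destination/service definition:
         ordered failover list, the first channel that connects wins -->
    <default-channels>
        <channel ref="my-rtmp"/>
        <channel ref="my-rtmpt"/>
        <channel ref="my-polling-amf"/>
    </default-channels>

Note that because the applications are compiled against services-config.xml, a change to the channel set means recompiling them.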

Similar Messages

  • Web services Connection_Operations.Logon call fails on Data Services 4.0

    Hello,
    We recently installed Data Services 4.0.  We have a custom Java app which makes use of the web services interface.  We had previously used Axis to generate the Java classes.  With DS 4, it looks like the Logon method changed, as it now requires the CMS system and authentication.  We manually updated the Java classes to include these, which has been our general approach in the past for newer releases.  However, DS 4 uses Axis2 and I'm not sure that this approach will work.
    My problem is that the "Logon" call fails with very little detail.
    Viewing both webadmin.log and WebService.log, they only report the following:
    11/02/2011 18:42:17 [  SEVERE ] Logon failed.  Error: null
    I understand that I can control the logging detail.  The Integrator Guide for DS 4.0 has the following instruction, yet there is no log4j.properties file on my system:
    "To control the level of detail in the webadmin.log file, you must edit the log4j.properties file.
    The properties file is located in:
    LINK_DIR\ext\webserver\webapps\acta_web_admin\WEB-INF
    To obtain a debug trace of events, change the log level from the default of INFO to DEBUG. For example,
    log4j.rootLogger=DEBUG, A"
    Any advice would be greatly appreciated.
    Thanks

    Hello,
    Unfortunately, stdout.log below doesn't show anything, possibly due to some misconfiguration of log4j.properties.  In my previous post I requested info on which log4j.properties should be modified, as the documentation in the DS 4.0 Integrator's Guide mentions a non-existent path.
    log4j:WARN No appenders could be found for logger (org.apache.commons.digester.Digester).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN No appenders could be found for logger (com.sun.faces.config.ConfigureListener).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN No appenders could be found for logger (org.apache.commons.digester.Digester).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN No appenders could be found for logger (com.sun.faces.config.ConfigureListener).
    log4j:WARN Please initialize the log4j system properly.
    com.businessobjects.webpath.rebean3ws.Activator
    log4j:WARN No appenders could be found for logger (org.apache.commons.digester.Digester).
    log4j:WARN Please initialize the log4j system properly.
    null
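    For what it's worth, the "No appenders could be found" warnings just mean log4j has no appender configured for the root logger. A minimal log4j.properties in the spirit of the Integrator Guide's example might look like the sketch below; the appender name A follows the guide's snippet, while the console appender and layout are assumptions, so adjust them to where you want the output routed.

        # minimal log4j.properties sketch - appender name "A" as in the guide's example;
        # the console appender and pattern layout are assumptions, adjust as needed
        log4j.rootLogger=DEBUG, A
        log4j.appender.A=org.apache.log4j.ConsoleAppender
        log4j.appender.A.layout=org.apache.log4j.PatternLayout
        log4j.appender.A.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n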

  • iPhone 5 cellular data services failing

    ISSUE/SYMPTOMS:  iPhone 5 ONLY: Cellular data stops intermittently, cellular data locks up without warning, Wi-Fi data connection not working on occasion, iCloud contacts, calendar and reminders not updating, Find My Phone not working, mail or Internet not available or server time-outs. No error messages though. All of these are symptoms of no data service. The problems seem to be related to the iPhone 5 failing to switch between LTE and 4G or 3G data services on AT&T.
    My iPhone 5 32GB AT&T version (iOS 6.0.2), along with my clients and family using the iPhone 5 in the state of Oklahoma, is having this issue on AT&T: no cellular data connection is re-established if at any time the LTE connection is lost or weak.  As with iPhones prior to the iPhone 5, the iPhone 5 is supposed to drop down to the next tier of data service and re-connect to cellular data service when the higher-speed cellular data service is not available.  Apparently this is not working well on the iPhone 5.
    What led me to research this LTE issue was that my iPhone was not syncing contacts, calendars and reminders over iCloud with my other iOS devices.  Other times the issue became apparent when iMessage quit sending and receiving until I rebooted my iPhone (sometimes it would finally send the message as a “text message”). When I had these problems I also found out I was not getting any cellular data service for Internet browsing, mail, Maps, Find My iPhone, etc.  This would happen without notice. In other words, I would not receive any notice or error message that data service had stopped, other than the fact that I was not receiving any mail, iMessages, or iCloud sync updates. My only resolution was to reset or reboot my iPhone several times a day, and that would resolve it about 70% of the time.
    I have talked to Apple Support on 2-3 occasions in the last week and AT&T Support twice in the last 2 days.  Only the AT&T Technical Support Advisor (if it is really true) informed me of a nationwide issue relating to the iPhone 5 and LTE: iPhone 5s are not re-connecting to 4G or 3G cellular data service once the LTE data connection is lost or dropped... the voice calls continue to work, but cellular data service fails and stays locked up without an error message until the customer reboots the iPhone.
    Obviously this is very unacceptable. We depend on these data services and at minimum need to know when it stops working instead of finding out from a phone call that someone sent us an important email or something.
    The AT&T representative also said that both AT&T and Apple are aware of and are working on the issue as it is a nationwide problem with iPhone 5 and LTE. The Apple Support Advisors I talked to made no mention of this "known" issue. AT&T said that it is partly AT&T responsibility because the capacity of LTE is overloaded and cellular data connections are dropping because of the lack of coverage and/or capacity. They actually refunded my data service for a whole month in apology for the inconvenience (I did not even ask for that--I just want my data service to work).  But they also said that Apple has responsibility in this issue because the iPhone 5 should drop down and re-connect to the next lower tier of data service when LTE is not available or lost; instead, it is locking up somehow or it still thinks it is connected to LTE when it is actually not. I don't know how accurate these statements are, but my problem is real.
    TEMPORARY WORK-AROUND / FIX FROM AT&T:  Disable LTE on the iPhone 5.  Or when you figure out that data is not coming to your iPhone, use Airplane Mode ON/OFF for a few seconds to force re-connection to a tower without rebooting the iPhone.  You can Disable LTE on the iPhone 5 in SETTINGS/GENERAL/CELLULAR. I have disabled LTE on my iPhone 5 and have not had any of these cellular data issues since. 
    Disabling LTE is not acceptable either, but it is working. At least I seem not to be losing the connection or locking up without knowing it.  In my office over the course of a day, my iPhone 5 will jump back and forth between LTE and 4G sitting in the same place.  Disabling LTE has stopped this and the problem, but the iPhone 5 should switch cellular data services seamlessly, just like the phone switches towers on a voice call seamlessly (most of the time).
    WI-FI ALSO STOPS WORKING SOMETIMES AFTER THIS LTE disconnect / lock-up issue:  That has been my experience along with my clients and family as well. If you use Wi-Fi a lot on your iPhone it will hide this LTE issue until you leave your Wi-Fi and get on LTE and then drop LTE (silently) and then go back to your Wi-Fi.  My experience has been I have to reboot the iPhone to even get Wi-Fi working again after that happens.
    I do not know if some of these comments and analysis are really accurate or valid or not... all I know for sure is since I upgraded my iPhone to the 5 along with my other family members and clients, we have had "no data" on many occasions and are continually rebooting our iPhones and looking for what is wrong. 
    My questions to the support community are these:
         1. Is anyone else having this issue? Or is it just me and my few? Maybe a lot of people do not realize they are not getting their data because they are on Wi-Fi most of the time or they are just rebooting and not trying to identify the real problem. AT&T said it was nationwide iPhone 5/LTE issue.  Anyone else heard that from either Apple or AT&T?
         2. Is this an iPhone 5 / Apple issue or is AT&T solely responsible? Or is it both? Where do we get a solution??? Is AT&T LTE holding false connections with iPhone 5 or is the iPhone 5 failing to recognize a dropped LTE cellular data connection and then failing to attempt to re-connect with 4G or 3G?
         3. Did I miss where this issue is posted? I searched Apple Support and Communities and did not get any hits.  Any updates or further insight on the issue? Any other fixes other than Disabling LTE?
          4. If this is in fact a real issue belonging in part to the iPhone 5, we need to escalate Apple’s attention to it.
    Thanks in advance for your help and feedback.

    Thank you fellow users and experts for your feedback and updates. Sorry about the large font on the original post... accidental via cut and paste.
    I have escalated the issue with both Apple and AT&T.  I actually got senior level troubleshooters from both Apple and AT&T on the phone at the same time... rare moment. Here are the key points of the update:
         1. AT&T did NOT confirm the earlier statement by another AT&T Technical Support Agent that this was an emerging nationwide issue about iPhone 5 and LTE losing data connection and not reconnecting at a lower level or re-connecting to LTE once the signal came back. So that earlier statement was not accurate as suspected by all of us. The senior tech at AT&T said he has seen this issue sporadically over the last month or so, but only with about 3-4 customers.  He said it seems to be location or phone dependent (some phones not all phones).
         2. The Apple Tech was satisfied that all that could be done to eliminate my issues possibly relating to my instance of software / iOS had been accomplished except erasing my iPhone and restoring as a new phone (I have avoided this inconvenience with the argument that other family/clients with the iPhone 5 have had the same problem).
         3. AT&T went so far as to examine which towers I had been connecting to over the last week and looked at other technical data that could possibly explain the issue. I now know I have 3 towers that provide LTE within 4 miles of my home/office, one within a half mile.  But so far there is nothing to explain or even properly identify the issue.  AT&T offered to have me take my phone to an AT&T store and try to force-replicate the problem there; if so, the store would put my SIM card in a brand new iPhone 5 (just to test) and try to replicate the problem again. If replicated on a new phone, there is a deeper issue; if resolved on a new phone, the answer is obvious. I have not taken the time to do this yet.
         4. On Friday, Jan 4, 2013, while talking and between the tech support calls (on a different phone), I was able to re-create the problem 2 separate times in my home/office by forcing my iPhone to weaken its connection to service (burying it under electronic gear), then waiting a few minutes, then trying to browse the Internet--no data service unless I manually reset the connection.  I am only able to cause the issue to occur if there is at least 15 min of idle time after a weak-signal condition AND only if I don't completely lose a connection to cellular voice service. It seems that if you go down to NO SERVICE the phone will do a new search and keep searching until it reconnects to LTE or other signals correctly, but a weak connection seems to be strong enough for voice service yet can sometimes cause the LTE data service to stop without reconnecting to LTE, 4G or 3G unless I manually intervene. It also seems that idle time with a weak connection is necessary to re-create the problem manually.  Since Friday, I have not been able to re-create the problem. This has been an intermittent problem, so who knows.
    FYI, I have never had this problem with my new iPad (also on AT&T LTE) that has been with me when the iPhone fails to have data service, but my iPad is never in my pocket or other places with perhaps less signal.
    Again, if you are connected to Wi-Fi on your iPhone 5 most of the time, you may never notice this issue if you are even having it.
    So far no progress on the real issue or resolution, but both Apple and AT&T have put forth very admirable effort.
    If you or others you know are having this issue, please take the time to post a reply to this issue. Thank you.

  • WCF Data Service fails when more than 8 properties are specified in the 'select=' portion

    Hi:
    I am using WCF Data Service and Oracle
    EF Provider is ODAC11.2 Release 4
    WCF Data Service fails when more than 8 properties are specified in the 'select=' portion of the URI.
    Here is my code:
    var q = from c in this.ctx.SALESORDER_ITEM
            select new
            {
                c.SORDERDETAILID,
                c.IID, c.DMFLAG, c.OWNERID, c.SKUID, c.SKU_ID, c.TRADENO, c.SOURCEID, c.SORDERID
            };
    exception:
    InvalidOperationException: An error occurred for this query during batch execution. See the inner exception for details
    The inner exception is null, but the DataServiceClientException states: Value cannot be null Parameter name: value
    The exception is thrown in the overridden base.OnStartProcessingRequest(args) method.
    Here is the call stack as well:
    at System.Data.Services.WebUtil.CheckArgumentNull[T](T value, String parameterName)
    at System.Data.Services.Internal.ProjectedWrapper.set_PropertyNameList(String value)
    at lambda_method(Closure , Shaper )
    at System.Data.Common.Internal.Materialization.Coordinator`1.ReadNextElement(Shaper shaper)
    at System.Data.Common.Internal.Materialization.Shaper`1.SimpleEnumerator.MoveNext()
    at System.Data.Services.Internal.ProjectedWrapper.EnumeratorWrapper.MoveNext()
    at System.Data.Services.DataService`1.SerializeResponseBody(RequestDescription description, IDataService dataService)
    at System.Data.Services.DataService`1.HandleNonBatchRequest(RequestDescription description)
    at System.Data.Services.DataService`1.HandleRequest()
    Is there a maximum number of properties in the $select statement?
    I think maybe it is the Oracle provider's problem, but I don't know how to debug it. Can anyone help me?
    Any help is greatly appreciated

    I believe the null/empty string issue is unrelated to the 8 column issue, at least for ODP.NET. For example, let's take the original query in the OBE:
    http://.../yoursvcfile.svc/EMPLOYEES?$select=EMPLOYEE_ID,FIRST_NAME,LAST_NAME,SALARY,DEPARTMENT_ID,DEPARTMENT,EMAIL,PHONE_NUMBER,MANAGER_ID
    Let's make all the columns selected not nullable. You can do this with the Oracle Dev Tools. Specifically, PHONE_NUMBER and FIRST_NAME are the only nullable fields. I make them non-nullable and re-run the query and the same error occurs. Thus, these values should never be made null. Moreover, in all 107 rows, none of these row values consist of empty strings anyway.
    Looking into the problem further, WCF DS is calling methods in the System.Data.Services.Internal namespace.
    http://msdn.microsoft.com/en-us/library/system.data.services.internal.aspx
    Specifically, we see your issue when ProjectedWrapperMany is used. You will notice that ProjectedWrapper0, ProjectedWrapper1, ..., ProjectedWrapper8 are also present in the same namespace. As soon as the number of columns exceeds 8, ProjectedWrapperMany is used and we see the error. We're going to ask MS to help analyze the issue since this is .NET-internal functionality being called.

  • Data Services 12.2.3.0 BODI-1112015 Adapter metadata import failed

    Hi Experts,
    I am using Data Services 12.2.3.0.
    I have an issue in importing functions through 'Adapter' type datastore into Data Services. I can open the datastore and see the list of functions available, but when I try to import them, I get the error BODI-1112015 Adapter metadata import failed.
    The setup and the errors are as below.
    The adapter datastore is set up as below.
    I built a new keystore called clientkeystore.jks in the ..\bin directory. Then I created the .CSR file, and imported the signed chained certificate (I believe it's a chained certificate) of the server hosting the WSDL into the keystore.
    Thanks for the post http://scn.sap.com/thread/1589052 . After changing the metadata character set to utf-8, I can see a list of functions when I open this New_Datastore in Data Services. It proves that the datastore setup has no problem parsing the wsdl file and giving me the list of functions in it.
    However, the error appears when I try to import them.
    Error is:
    Adapter metadata import failed. Error message: (BODI-1112015) Error parsing the <TheFunctionToBeImported> included in the XML sent by the adapter to represet a function <Error importing XML Schema from file <adapter_schema_in.xsd>:<XML parser failed: Error <Schema Representation Constraint: Namespace 'http://result.form.v81.api.keysurvey.com' is referenced without <import> declaration> at line <13>, char <46> in < < xsd:schema xmln:xsd=http://www.w3.org/2001/XMLSchema" xmln:tns="http://result.form.v81.api.keystore.com" xmlns:diws="http://businessobjects.com/diwebservice" targetnamespace="http://www.businessobjects.com/diwebservice"><xsd:import namespace='http://v81.api.keysurvey.com' schemaLocation='C:\Program Files\Business Objects\BusinessObjects Data Services\ext\webservice\FormResultManagemenetgetRespondentsgetRespondents0.xsd'/>
    <xsd: import namespace='http://result.form.v81.api.keysurvey.com' schemaLocation='C:\Program Files\Business Objects\BusinessObjects Data Services\ext\webservice\FormResultManagemenetgetRespondentsgetRespondents2.xsd'/> ........
    When comparing it with the wsdl file (as below), it is worth noting that the schemaLocation is changed to a local directory under C:\Program Files\Business Objects\BusinessObjects Data Services\ext\webservice, while that was not the case in the wsdl; there the schemaLocation is on the server.
    I am wondering if the redirection from the server specified in the wsdl file to the local directory has caused this error. The error 'namespace is referenced without <import>' is apparently wrong, as the <import> is right there.
    Or is there any other reason behind this?
    I appreciate any advice or question from you!

    I have reached the exact same error as this post http://scn.sap.com/thread/3190403
    The error is
    [Mon Jun 18 23:14:28 2012] [error] ..\..\src\core\deployment\conf_builder.c(876) Specifyingservices and modules directories using axis2.xml but path of the library directory is not present
    [Mon Jun 18 23:14:28 2012] [error] ..\..\src\core\deployment\conf_builder.c(261) Processing transport senders failed, unable to continue
    [Mon Jun 18 23:14:28 2012] [error] ..\..\src\core\deployment\dep_engine.c(939) Populating Axis2 Configuration failed
    [Mon Jun 18 23:14:28 2012] [error] ..\..\src\core\deployment\conf_init.c(195) Loading deployment engine failed for client repository C:\Program Files (x86)\SAP BusinessObjects\Data Services\ext\webservice-c\axis2.xml
    As it is identified as a version problem, this issue is not going to be investigated any further.
    As an alternative, you can try to use the Oracle 11g SOAP_API.sql.

  • Data Services job fails while inserting data into SQL Server from Linux

    The SAP Data Services (Data Quality) server is running on a Linux server and on Windows. A Data Services job which uses the ODBC driver to connect to SQL Server is failing after selecting a few thousand records, with the following reason as per the Data Services log on the Linux server. We can run the same Data Services job from the Windows server; the only difference there is that it uses the SQL Server drivers provided by Microsoft. The possible errors are provided below, out of which #1 and #4 may not be the reason for the job failure. The DBA checked the other errors and confirmed that the transaction log size is unlimited and the system has space.
    Why does the same job run from the Windows server and fail from Linux? Is it because the ODBC drivers on Windows and Linux work in different ways, or is there a conflict between the Data Services job and the ODBC driver?
    ===== Error Log ===================
    8/25/2009 11:51:51 AM Execution of <Regular Load Operations> for target <DQ_PARSE_INFO> failed. Possible causes: (1) Error in the SQL syntax;
    (2)6902 3954215840 RUN-051005 8/25/2009 11:51:51 AM Database connection is broken; (3) Database related errors such as transaction log is full, etc.; (4) The user defined in the
    6902 3954215840 RUN-051005 8/25/2009 11:51:51 AM datastore has insufficient privileges to execute the SQL. If the error is for preload or postload operation, or if it is for
    ===== Error Log ===================

    this is another method
    http://www.mssqltips.com/sqlservertip/2484/import-data-from-microsoft-access-to-sql-server/

  • Restart Data Services 3.2 fails

    I cannot access the trace log, monitor log and error log files. Every time I access them through the links I get a time-out error. The server is available and accessible. I just tried to restart the BusinessObjects Data Services server through the BO DS Server Manager. This has not worked. I cannot restart the service "BusinessObjects Data Services".
    This error occurs: Error 1053: "The service did not respond to the start or control request in a timely fashion"
    This has happened a few times in the last months. When I restart the server, all is OK again.
    How can I restart the service without reboot?

    Hello,
    1. What is the version of the Microsoft .NET Framework? (Control Panel > Add/Remove Programs).
    2. What is the error message in Event Viewer (Start > Run > eventvwr.msc)?
    3. Which user runs this service (Start > Run > services.msc > Business Objects Data Services > Properties > Log On tab)?
    4. Is Data Execution Prevention disabled? (My Computer > Properties > Advanced > Performance > Settings > Data Execution Prevention tab)?
    Thank you,
    Viacheslav.

  • Data Services 4.1 upgrade failed

    Hi Everyone,
    We came across a really strange situation when trying to upgrade our DS from 4.0 SP3 to DS 4.1 SP2.
    Before the upgrade, our DSConfig.txt file is in the bin folder where DS is installed (E:\SAP BusinessObject\Data Services\bin).  After the upgrade, we discovered a second DSConfig.txt file was created and saved to C:\Program Files\SAP BusinessObjects\Data Services\conf folder.  How is this possible when during the upgrade we specifically pointed to E:\SAP BusinessObject location?  More importantly which DSConfig.txt file should we use?  Right now we can't connect to our repositories through the CMC.
    TIA

    Also, do not worry about the change in the directory. The release notes mention that the <LINK_DIR> path has changed to the common directory path.
    Configuration 
    Old location:
    <LINK_DIR>/bin/DSConfig.txt
    New Location:
    <DS_COMMON_DIR>/conf/DSConfig.txt
    It is weird why SAP has to change the paths. I think your upgrade went well.
    Please check the release notes documents for other changes.
    http://help.sap.com/businessobject/product_guides/sboDS41/en/sbo411_ds_whats_new_en.pdf
    Good Luck

  • Failed to open service OracleDEV102TNSListener, error 1060

    Dear Consultants,
    I have been trying to install ECC 6 and Oracle on Windows 2003. My hardware is: RAM: 2 GB,
    HDD: 300 GB.
    I have an Intel Core Duo processor and I am using a Gigabyte motherboard with an Intel chipset.
    My installation stops at the Import ABAP phase; the logs follow.
    ERROR 2007-03-10 04:47:46
    CJS-30022  Program 'Migration Monitor' exits with error code 103. For details see log file(s) import_monitor.java.log, import_monitor.log.
    ERROR 2007-03-10 04:47:47
    FCO-00011  The step runMigrationMonitor with step key |NW_Onehost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CreateDBandLoad|ind|ind|ind|ind|9|0|NW_ABAP_Import_Dialog|ind|ind|ind|ind|5|0|NW_ABAP_Import|ind|ind|ind|ind|0|0|runMigrationMonitor was executed with status ERROR .
    INFO 2007-03-10 05:03:18
    An error occured and the user decide to stop.\n Current step "|NW_Onehost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CreateDBandLoad|ind|ind|ind|ind|9|0|NW_ABAP_Import_Dialog|ind|ind|ind|ind|5|0|NW_ABAP_Import|ind|ind|ind|ind|0|0|runMigrationMonitor".
    I have set all the environment variables. I ran r3trans -d and the result is:
    C:\Documents and Settings\devadm>r3trans -d
    This is r3trans version 6.13 (release 700 - 20.02.06 - 16:15:00).
    unicode enabled version
    2EETW169 no connect possible: "connect failed with DBLI_RC_LOAD_LIB_FAILED."
    r3trans finished (0012).
    I also ran r3trans -x and the result is:
    4 ETW000  [dbsloci.    ,00000]  *** ERROR => CONNECT failed with sql error '12541'
    4 ETW000                                                                              21  1.098448
    4 ETW000  [dev trc     ,00000]  Try to connect with default password                 142  1.098590
    4 ETW000  [dev trc     ,00000]  Connecting as SAPSR3/<pwd>@DEV on connection 0 (nls_hdl 0) ... (dbsl 700 240106)
    4 ETW000                                                                              22  1.098612
    4 ETW000  [dev trc     ,00000]  Nls CharacterSet                 NationalCharSet              C      EnvHp      ErrHp ErrHpBatch
    4 ETW000                                                                              23  1.098635
    4 ETW000  [dev trc     ,00000]    0 UTF8                                                      1   0244CF08   0245243C   02451CC4
    4 ETW000                                                                              24  1.098659
    4 ETW000  [dev trc     ,00000]  server_detach(con_hdl=0,stale=0,svrhp=02463484)       14  1.098673
    4 ETW000  [dev trc     ,00000]  Detaching from DB Server (con_hdl=0,svchp=02451C10,srvhp=02463484)
    4 ETW000                                                                              19  1.098692
    4 ETW000  [dev trc     ,00000]  Deallocating server context handle 02463484           16  1.098708
    4 ETW000  [dev trc     ,00000]  Allocating server context handle                      15  1.098723
    4 ETW000  [dev trc     ,00000]  Attaching to DB Server DEV (con_hdl=0,svchp=02451C10,svrhp=02463484)
    4 ETW000                                                                              29  1.098752
    4 ETW000  [dev trc     ,00000]  Tue Mar 13 04:38:09 2007                         1093335  2.192087
    4 ETW000  [dboci.c     ,00000]  *** ERROR => OCI-call 'OCIServerAttach' failed: rc = 12541
    4 ETW000                                                                              19  2.192106
    4 ETW000  [dbsloci.    ,00000]  *** ERROR => CONNECT failed with sql error '12541'
    4 ETW000                                                                              19  2.192125
    4 ETW000  [dblink      ,00431]  ***LOG BY2=>sql error 12541  performing CON [dblink#3 @ 431]
    4 ETW000                                                                             176  2.192301
    4 ETW000  [dblink      ,00431]  ***LOG BY0=>ORA-12541: TNS:no listener [dblink#3 @ 431]
    4 ETW000                                                                              19  2.192320
    2EETW169 no connect possible: "DBMS = ORACLE                           --- dbs_ora_tnsname = 'DEV'"
    Event Viewer result is
    ----
    Faulting application isqlplussvc.exe, version 1.0.7.0, faulting module msvcrt.dll, version 7.0.3790.0, fault address 0x0003113b.
    Finally, in the SAPinst logs I could see:
    Starting tnslsnr: please wait...
    Failed to open service <OracleDEV102TNSListener>, error 1060.
    TNSLSNR for 32-bit Windows: Version 10.2.0.1.0 - Production
    System parameter file is C:\oracle\DEV\102\network\admin\listener.ora
    Log messages written to C:\oracle\DEV\102\network\log\listener.log
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=
    .\pipe\DEV.WORLDipc)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=
    .\pipe\DEVipc)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=DEV)(PORT=1527)))
    Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=DEV.WORLD))
    STATUS of the LISTENER
    Alias                     LISTENER
    Version                   TNSLSNR for 32-bit Windows: Version 10.2.0.1.0 - Production
    Start Date                10-MAR-2007 01:29:29
    Uptime                    0 days 0 hr. 0 min. 3 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   C:\oracle\DEV\102\network\admin\listener.ora
    Listener Log File         C:\oracle\DEV\102\network\log\listener.log
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=
    .\pipe\DEV.WORLDipc)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=
    .\pipe\DEVipc)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=DEV)(PORT=1527)))
    Services Summary...
    Service "DEV" has 1 instance(s).
      Instance "DEV", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    Please suggest a possible workaround; as you can see, I have tried everything.
    Regards,

    Dear Umesh,
    I have triggered the installation again; the problem remains unresolved.
    PLEASE CONFIRM THAT I AM USING AN INTEL CORE DUO PROCESSOR; does it have any dependencies or prerequisites?
    Pasting the last few lines of the SAPAPPL1_15 logs, where I got the error in the ABAP import.
    DbSl Trace: ORA-4031 occurred when executing SQL statement (parse error offset=0)
    (DB) ERROR: DDL statement failed
    (CREATE UNIQUE INDEX "VDBEPI_EU~0" ON "VDBEPI_EU" ( "MANDT", "BUKRS", "RBELKPFD", "RPOSNR" ) TABLESPACE PSAPSR3 STORAGE (INITIAL 16384 NEXT 0000002560K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
      (SQL error 4031)
      error message returned by DbSl:
    ORA-04031: unable to allocate 40 bytes of shared memory ("shared pool","CREATE UNIQUE INDEX "VDBEPI_...","Typecheck","qcsqlpath: qcsAddSqlPath")
    (DB) INFO: disconnected from DB
    E:\usr\sap\NAJ\SYS\exe\uc\NTI386\R3load.exe: job finished with 1 error(s)
    E:\usr\sap\NAJ\SYS\exe\uc\NTI386\R3load.exe: END OF LOG: 20070517090126
    ===================================================
    IMPORT_MONITOR.JAVA logs.
    Import Monitor jobs: running 2, waiting 35, completed 34, failed 1, total 72.
    Loading of 'SAPAPPL1_15' import package: ERROR
    Import Monitor jobs: running 1, waiting 35, completed 34, failed 2, total 72.
    Loading of 'SAPAPPL2_5' import package: ERROR
    Import Monitor jobs: running 0, waiting 35, completed 34, failed 3, total 72.
    Import Monitor jobs: running 1, waiting 34, completed 34, failed 3, total 72.
    Import Monitor jobs: running 2, waiting 33, completed 34, failed 3, total 72.
    Import Monitor jobs: running 3, waiting 32, completed 34, failed 3, total 72.
    Loading of 'D021T' import package: OK
    Import Monitor jobs: running 2, waiting 32, completed 35, failed 3, total 72.
    Import Monitor jobs: running 3, waiting 31, completed 35, failed 3, total 72.
    Loading of 'SAPAPPL1_12' import package: OK
    Import Monitor jobs: running 2, waiting 31, completed 36, failed 3, total 72.
    Import Monitor jobs: running 3, waiting 30, completed 36, failed 3, total 72.
    Loading of 'SAPSSEXC_4' import package: OK
    Import Monitor jobs: running 2, waiting 30, completed 37, failed 3, total 72.
    Import Monitor jobs: running 3, waiting 29, completed 37, failed 3, total 72.
    Loading of 'SAPSSEXC_6' import package: OK
    Import Monitor jobs: running 2, waiting 29, completed 38, failed 3, total 72.
    Regards,

  • Flash Builder 4.5 Data Services Wizard, setting up REST service call returns Internal Error Occurred

    Dear all -
    I am writing with the confidence that someone will be able to assist me.
    I am using the Flash Builder Data Services Wizard to access a server that utilizes REST-type calls and returns JSON objects. The server is a Jetty server and it apparently already works and is returning JSON objects (see below for an example). It is both HTTP and HTTPS enabled, and right now it has a cross-domain policy file that is wide open (insecure, but it's not a production server, it's internal).
    The crossdomain file looks like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
       <allow-http-request-headers-from domain="*" headers="*" secure="false"   />
       <allow-access-from domain="*" to-ports="*" secure="false"/>
       <site-control permitted-cross-domain-policies="master-only" />
    </cross-domain-policy>
    The crossdomain file is in the jetty server's root directory and is browseable via HTTP and HTTPS (i.e. browsing to it returns the xml)
    Now, before all of you say that using wizards (generally) sucks, I thought I would utilize the FB Data Services Wizard, as at least it would provide a template against which I could build additional code, or replace and improve the code it produces.
    With that in mind, I browse to the URL of the Jetty Server with any web browser (for example, Google Chrome, Firefox or IE) with a URL like this (the URL is a little confidential at the moment, but the structure is the same)
    https://localhost:somePort/someKey/someUser/somePassword/someTask
    *somePort is the SSL port like 8443
    *someKey is a key to access the URL's set of services
    The URL returns a JSON object as a string in the web browser, and it appears like the following:
    {"result":success,"value":"whatEverTheValueShould"}
    Looks like the JSON string/object is valid.
    I went through the Flash Builder Data Services Wizard to set up HTTP access to this server. The information that I filled in is described below:
    Do you want to use a Base URL as a prefix for all operation URLs?
    YES
    Base URL:
    https://localhost:8443/someKey/
    Name                    : someTask
    Method                    : POST
    Content-Type: application/x-www-form-urlencoded
    URL                              : {someUser}/{somePassword}/someTask
    Service Name: SampleRestapi
    Services Package: services.SampleRestapi
    datatype objects: valueObjects:
    Completing the wizard, I run the Test Operation command. Remember, no authentication is needed to get a JSON string.
    It returns:
    InvocationTargetException: Unable to connect to the URL specified
    I am thinking - okay, but the URL IS browseable (as I originally was able to browse to it, as noted above).
    I continued to test the service by creating a Flex application that accepts a username and password in a form. When the form is submitted, the call to the service is invoked and an event handler returns the result. The code is below (with some minor changes to mask the actual source).
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                                     xmlns:s="library://ns.adobe.com/flex/spark"
                                     xmlns:mx="library://ns.adobe.com/flex/mx"
                                     xmlns:SampleRestapi="services.SampleRestapi.*"
                                     minWidth="955" minHeight="600">
              <fx:Script>
                        <![CDATA[
                                  import mx.controls.Alert;
                                  import mx.rpc.events.ResultEvent;
                                   protected function button_clickHandler(event:MouseEvent):void
                                   {
                                             isUserValidResult.token = SampleRestAPI.isUserValid(userNameTextInput.text, passwordTextInput.text);
                                   }
                                   protected function SampleRestAPI_resultHandler(event:ResultEvent):void
                                   {
                                             // TODO Auto-generated method stub
                                             // print out the results
                                             txtAreaResults.text = event.result.message as String;
                                             // txtAreaResults.appendText( "headers \n" + event.headers.toString() );
                                   }
                        ]]>
              </fx:Script>
              <fx:Declarations>
                        <SampleRestapi:SampleRestAPI id="SampleRestAPI"
                                                                                                 fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)"
                                                                                                 result="SampleRestAPI_resultHandler(event)"
                                                                                                 showBusyCursor="true"/>
                        <s:CallResponder id="isUserValidResult"/>
                        <!-- Place non-visual elements (e.g., services, value objects) here -->
              </fx:Declarations>
              <s:Form defaultButton="{button}">
                        <s:FormItem label="UserName">
                                  <s:TextInput id="userNameTextInput" text="q"/>
                        </s:FormItem>
                        <s:FormItem label="Password">
                                  <s:TextInput id="passwordTextInput" text="q"/>
                        </s:FormItem>
                        <s:Button id="button" label="IsUserValid" click="button_clickHandler(event)"/>
                        <s:FormItem  label="results:">
                                  <s:TextArea id="txtAreaResults"/>
                        </s:FormItem>
              </s:Form>
    </s:Application>
    It's a simple application to be sure. When I run it , I get the following returned in the text area field txtAreaResults:
    An Internal Error Occured.
    Which is equivalent to the following JSON string being returned:
    {"success":false,"value":"An Internal Error Occured"}
    It appears that the call is being made, and that a JSON object is being returned... however it does not return the expected results?
    Again the URL constructed is the same:
    https://www.somedomain.com:somePort/someKey/someUser/somePassword/someTask
    So I am wondering what the issue could be:
    1) is it the fact that I am browsing the test application from an insecure (http://) web page containing the Flex application and it is accessing a service through https:// ?
    2) is the JSON string structurally correct? (it appears so).
    3) There is a certificate enabled for HTTPS; it does not match the test site I am using (the cert is for www.somedomain.com but I am using localhost for testing). Would that be an issue? Google Chrome and IE just ask me to proceed anyway, to which I say "yes".
    Any help or assistance on this would be appreciated.
    thanks
    Edward

    Hello everyone -
    Since I last posted, an interesting update happened. I tested my Flex application again (it calls a Jetty server that returns a JSON object) in different browsers.  I disabled HTTPS for now, and the crossdomain.xml policy file is wide open for testing (i.e. allowing every request to return data), so the app is accessing the data using HTTP only. Browsers: IE, Opera, Firefox and Chrome. Each browser contained the SAME application and revision of the Flash Player (10.3.183.10 debugger for Firefox, Chrome, Opera, Safari on PC; 11.0.1.129 consumer version in IE9). Take a look at the screenshot (Safari not shown, although the result was the same as in IE and Chrome).
    Note that Opera and Firefox returned successful values (i.e. successful JSON objects) using the same code generated from the Data Services Wizard. Chrome, IE, and Safari failed with an internal error. So I am left wondering - WHY? Is it something with the Flash Player? The browsers? The Flex SDK? Any thoughts are appreciated. Again, the code is found in the original thread above.

  • Setup Cluster using Solaris Container data service

    We have a two-node cluster that we would like to use to create either a zone cluster, or a scalable (or multiple-master) data service of two zones, one on each node, using the Solaris Container data service. We have an app running in the zone, CiscoWorks, that has a local database of jobs that are scheduled to run to configure Cisco switches. I was curious how we set up the storage. If each zone is running on local disks, how do the zones stay in sync and the database get updated with job information? Would I set up a device group of the disks where the zones will reside on each node? Can I use SAN as the local disk so the zones can be replicated to a Disaster Recovery location?
    Thanks for any help,
    Chuck

    Chuck,
    Sadly, I think I'm going to make your implementation decisions a lot more complicated because there are three ways you can use zones within Solaris Cluster.
    1. Create a failover zone using the HA Solaris Container Data Service. Here the zone root moves between the cluster nodes as the zone fails over.
    2. Create static zones between which resource groups can migrate. Each zone root is local to the physical node. However, the configuration of the zones can be subtly different.
    3. Create a virtual cluster using static zones within which resource groups can migrate. Each zone root is local to the physical node. However, the configurations of the zones are forced to be the same.
    Note also that a ZFS zpool can only be mounted on one node or zone at any one time, although it can be mounted read/write in one zone and read-only in other zones on the same node (IIRC).
    I would be inclined to put your database into an HA configuration, i.e. one that runs on one node at any one time. I would then constrain that in a zone cluster that is bound to a project with restricted resources, i.e. CPUs and memory. Any other tiers of the application, could then be placed either in the global zone (main cluster) or placed in another zone cluster and equally constrained.
    I don't know if that's any help. I can recommend a good book on the subject <shameless plug "Oracle Solaris Cluster Essentials"/>. The example chapters should be of help.
    Regards,
    Tim
    ---

  • Data Services 4.0 installation with MS SQL 2008

    hey guys,
    I am trying to install Data Services 4.0 on a Windows server with an MS SQL database as the repository database. So my plan was to install MS SQL 2008 first, then IPS, then Data Services. But I am facing a problem in the installation of MS SQL itself. I installed MS SQL with the following features: database engine, client tools and management tools. The installation process was smooth and the SQL Server services (server and agent) both start up just fine. I installed a custom instance for the same.
    But the problem is, when I open Management Studio and try to connect to the database engine, it does not connect. I choose Database Engine as the server type, the local hostname as the server name and Windows Authentication as the authentication type, but it fails with the following error:
    TITLE: Connect to Server
    Cannot connect to <myservername>.
    ADDITIONAL INFORMATION:
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)
    For help, click: http://go.microsoft.com/fwlink?ProdName=MicrosoftSQLServer&EvtSrc=MSSQLServer&EvtID=53&LinkId=20476
    BUTTONS:
    OK
    Any clues as to what is causing this ??? Or is it normal at this stage of installation ??  I am confused. I have already tried to reinstall the server, no good, nothing in the event logs either.
    Thanks,
    RS

    Resolved via the following:
    In the TCP/IP Properties dialog box, on the IP Addresses tab, several IP addresses appear in the format IP1, IP2, up to IPAll. One of these is for the IP address of the loopback adapter, 127.0.0.1. Additional IP addresses appear for each IP Address on the computer. Right-click each address, and then click Properties to identify the IP address that you want to configure.
    If the TCP Dynamic Ports dialog box contains 0, indicating the Database Engine is listening on dynamic ports, delete the 0.
    Once I deleted those zeros and put in 1433 for TCP ports on each IP address, voila!

  • Data Services 4.0 Job Server ODBC DB2 library error

    I have followed the instructions in the SBO 401 DS Admin Guide for configuring ODBC data sources on UNIX. I have gotten the unixODBC part working from the Linux DS Job Server host using isql DSN UID PWD. I am able to connect to the DB2 9.7 UDB server and I run queries successfully. My problem is that when I go to submit a column profile request I get the following error back in the profiler monitor:
    Error: System call <LoadLibrary> to load and initialize functions failed for <libdb2.so>. Ensure the shared library is installed and located correctly.
    This library is contained in the /opt/sapds/DB2_Conn/odbc_cli/clidriver/lib directory which is contained in the LD_LIBRARY_PATH. I have these 2 entries in the ds_odbc.ini file:
    [DB2TEST]
    Driver = /opt/sapds/DB2_Conn/odbc_cli/clidriver/lib/libdb2.so
    [DB2]
    Description = IBM DB2 Adapter
    Driver = /opt/sapds/DB2_Conn/odbc_cli/clidriver/lib/libdb2.so
    There is a troubleshooting section in the admin guide that states "To determine whether all dependent libraries are set properly in the environment variables, you can use the ldd command on the ODBC driver manager library and the ODBC driver library."
    For example: ldd tdata.so
    Is this a real thing to check or just an example?
    When I check the db2 library I receive the following:
    $ ldd libdb2.so
            linux-vdso.so.1 =>  (0x00007fffd13fc000)
            libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002b5ab3d36000)
            libdl.so.2 => /lib64/libdl.so.2 (0x00002b5ab3f6e000)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b5ab4172000)
            librt.so.1 => /lib64/librt.so.1 (0x00002b5ab438e000)
            libpam.so.0 => /lib64/libpam.so.0 (0x00002b5ab4597000)
            libm.so.6 => /lib64/libm.so.6 (0x00002b5ab47a2000)
            libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b5ab4a26000)
            libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b5ab4d26000)
            libc.so.6 => /lib64/libc.so.6 (0x00002b5ab4f34000)
            /lib64/ld-linux-x86-64.so.2 (0x0000003791400000)
            libaudit.so.0 => /lib64/libaudit.so.0 (0x00002b5ab528d000)
    I believe that the DB2 ODBC DSN is working correctly because I can connect and run queries from the server host.
    Does anyone have an idea on what I might check to get this working? Should I open up a support message for further help? Let me know, thanks.
    Additionally, we have our Oracle profiler job working without issue.
    But we are having another issue in the Data Services Designer: when leaving the options screen we get the following error:
    "The Job Server is not responding. Cannot save modifications to the job execution options. Make sure the Job Server is running and accessible from the Designer, and then make your changes again. (BODI-1260016)." I do not think that this is related to running the DB2 profiler jobs because, as I said, we can run Oracle jobs without issue.
    Edited by: John Joiner on Oct 18, 2011 1:31 AM

    Hey John
    We are having a similar issue. Can you please let me know how the issue was fixed (i.e. what were the missing pieces in the setup)?
    Thanks!

  • ORA-00054 error when loading Oracle table using Data Services

    Hello,
    We are facing an ORA-00054 error when loading an Oracle table using BO Data Services
    (Oracle 10g database, BODS Xi 3.2 SP3)
    Test Job performs
    1- truncate table
    2- load table (tested in standard and bulk load modes)
    The scenario when the issue happens is:
    1- Run the loading job
    2- The job ends in error for some Oracle database error
    3- When re-running the same job, it fails with the following error
         ORA-00054: resource busy and acquire with NOWAIT specified
    It seems that after the first failure, the Oracle session for loading the table stays active and locks the table.
    To be able to rerun the job, we are forced to kill the Oracle session manually.
    The expected behaviour would be: on error, roll back the modifications made on the table and have BODS stop the Oracle session in a clean way.
    Can somebody tell me, or point me to, any BODS best practice about Oracle error handling to prevent such a case?
    Thanks in advance
    Paul-Marie

    The ORA-00054 can occur depending on how the job failed before. If this occurs you will need the DBA to release the lock on the table in question,
    or
    kill or stop the al_engine.exe process on the server, since that is what created the lock.
    This problem occurs when we select the bulk-loading option in Oracle. We also faced the same issue; our admin killed the session and then everything was all right.
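    For reference, the DBA can locate the blocking session with a query along these lines (a sketch; the table name is a placeholder, and killing sessions requires DBA privileges):

        -- sketch: find the session holding a lock on the target table
        SELECT s.sid, s.serial#, s.status, s.program, o.object_name
          FROM v$locked_object l
          JOIN dba_objects o ON o.object_id = l.object_id
          JOIN v$session   s ON s.sid       = l.session_id
         WHERE o.object_name = 'YOUR_TARGET_TABLE';  -- placeholder table name

        -- then, once confirmed, release it:
        -- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';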

  • Data services with SQL Server 2008 and Invalid time format variable

    Hi all
    Recently we have switched from DI on SQL Server 2005 to DS (Data Services) on SQL Server 2008. However, I have faced an odd error on a query that I was running successfully in DI.
    I validate my query output using a validation object to fill either the Target table (if it passes) or the Target_Fail table (if it fails). Before sending data to the Target_Fail table, I map the columns using a query to the Target_Fail table. I have a column called 'ETL_Load_Date' in that table, which I should fill with a global variable called 'Load_Date'. I have set this global variable in a script at the very beginning of the job. It is a date variable type:
    $Load_Date = to_char(sysdate(),'YYYY.MM.DD');
    When I assign this global variable to a datetime data type column in my table and run the job using Data Services, I get this error:
    error message for operation <SQLExecute>: <[Microsoft][ODBC SQL Server Driver]Invalid time format>.
    However I didn't have this problem when I was running my job on the SQL Server 2005 using Data Integrator. The strange thing is that, when I debug this job, it runs completely successfully!!
    Could you please help me to fix this problem?
    Thanks for your help in advance.

    Thanks for your reply.
    The ETL_Date is a datetime column and the global variable is a date data type. I have to use the to_char() function to be able to get just the date part of the current system datetime. Earlier I had tried the date_part function, but it returns an int, which didn't work for me.
    I found what the issue was. I don't know why, but there were some little squares next to the name of the global variable which I had mapped to the ETL_Date in the query object! The format and everything else was OK, as I had the same mapping in other tables that had worked successfully.
    When I deleted the column in the query object and added it again, my problem was solved.
