Too many BPM data collection jobs on backend system

Hi all,
We found about 40,000 data collection jobs running on our ECC6 system, which is far too many.
We run about 12 solutions, all linked to the same backend ECC6 system, which is most probably part of the problem. We plan to scale down to a single solution rather than the country-based approach.
But here we are now, and I have these questions.
1. How can I relate a BPM_DATA_COLLECTION job on ECC6 back to a particular solution? The job log gives me a monitor ID, but I can't relate that back to a solution.
2. If I deactivate a solution in the solution overview, does that immediately cancel the data collection for that solution?
3. In the monitoring schedule on a business process step we sometimes have intervals defined as 5 minutes, sometimes 60. The strange thing is that the drop-down for that field does not always offer the same list of values. Even within a solution I see that one step offers a long list of intervals, while the next step in the same business process only offers blank and 5 minutes.
How is this defined?
Thanks in advance,
Rad.

Hi,
How did you manage to get rid of this issue? I am facing the same problem.
Thanks,
Manan

Similar Messages

  • Utility data collection job Failure on SQL server 2008

    Hi,
I am facing a data collection job failure (Utility Data Collection) on a SQL Server 2008 server. Below is the error message:
<service Name>. The step did not generate any output.  Process Exit Code 5.  The step failed.
The job name is collection_set_5_noncached_collect_and_upload. From searching, the issue appears to be permission-related, but where exactly do the access issues come from? This job runs under a proxy account. Thanks in advance.

    Hi Srinivas,
Based on your description, you encounter the error message after configuring data collection in SQL Server 2008. For further analysis, could you please help to collect detailed log information? You can check the job history to find the error log around the time of the issue, as mentioned in this article. Also, please check the Data Collector logs by right-clicking Data Collection in the Management folder and selecting View Logs.
In addition, as your post shows, exit code 5 is normally an 'Access is denied' code. Please make sure that the proxy account has admin permissions on your system, and ensure that the SQL Server service account has rights to access the cache folder.
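For reference, a minimal sketch for pulling the most recent history rows for that job straight out of msdb (the job name is taken from the post above; adjust as needed):

    SELECT TOP (20)
           j.name, h.step_id, h.step_name,
           h.run_status,   -- 0 = failed, 1 = succeeded
           h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE j.name = N'collection_set_5_noncached_collect_and_upload'
    ORDER BY h.instance_id DESC;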
    Thanks,
    Lydia Zhang

  • Data Collection job collection_set_2_upload failed

We had a standalone server running SQL Server 2008 R2. I had configured the Data Collection jobs and uploaded the data into our Management Data Warehouse. It had worked fine. One week ago, the standalone server was rebuilt, and
then I found that all 4 jobs didn't work.
    collection_set_2_collection
    collection_set_2_upload
    collection_set_3_collection
    collection_set_3_upload
I re-configured both the Management Data Warehouse and Data Collection. Now collection_set_2_collection and collection_set_3_collection work fine. However, collection_set_2_upload and collection_set_3_upload fail with the error "the step did not generate any output". I cleaned up all DataCollectorCache files on the standalone server, but the error stayed.
Any idea? Thank you for any kind of suggestion.
    Charlie He

    Hi Charlie,
Based on my understanding, you configured the Data Collection jobs, and then the two upload jobs failed with the error "the step did not generate any output". The error also persisted after cleaning up the Data Collector cache files.
New data cannot be uploaded to the Management Data Warehouse database when one or more Data Collector cache files are corrupted. The corruption may be caused by one of the following:
    Data Collector encountered an exception.
    The disk runs out of free space while Data Collector is writing to a cache file.
    A firmware or a driver problem occurs.
So I suggest you double-check that the Data Collector cache files are cleaned up completely. For more information, please refer to this KB:
http://support.microsoft.com/kb/2019126.
If the issue persists, please check the agent job history to find the error log around the time the issue occurred. It would be better if you could provide detailed error information for our further analysis. For how to check agent job history, please refer to this link:
http://msdn.microsoft.com/en-us/library/ms181046(v=sql.110).aspx.
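As a hedged sketch (verify the view and column names on your build), the Data Collector's own views in msdb can show whether the sets are running and surface any recorded failure messages:

    -- list the collection sets and whether each is currently running
    SELECT collection_set_id, name, is_running
    FROM msdb.dbo.syscollector_collection_sets;

    -- most recent execution-log entries that carry a failure message
    SELECT TOP (10) collection_set_id, start_time, finish_time, failure_message
    FROM msdb.dbo.syscollector_execution_log
    WHERE failure_message IS NOT NULL
    ORDER BY start_time DESC;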
    Best regards,
    Qiuyun Yu

  • Data Collection Jobs fail to run

    Hello,
I have installed three servers with SQL Server 2012 Standard Edition, which should be identically installed and configured.
Part of the installation was setting up Data Collection, which worked without problems on two of the servers, but not on the last, where 5 of the SQL Server Agent jobs fail.
    The jobs failing are:
    collection_set_1_noncached_collect_and_upload
    Message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_2_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_collection
    Step 1 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    collection_set_3_upload
    Step 2 message:
    Executed as user: NT Service\SQLSERVERAGENT. The step did not generate any output.  Process Exit Code -1073741515.  The step failed.
    When I try to execute one of the jobs, I get the following event in the System Log:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Application Popup" Guid="{47BFA2B7-BD54-4FAC-B70B-29021084CA8F}" />
    <EventID>26</EventID>
    <Version>0</Version>
    <Level>4</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8000000000000000</Keywords>
    <TimeCreated SystemTime="2013-06-04T11:23:10.924129800Z" />
    <EventRecordID>29716</EventRecordID>
    <Correlation />
    <Execution ProcessID="396" ThreadID="1336" />
    <Channel>System</Channel>
    <Computer>myServer</Computer>
    <Security UserID="S-1-5-18" />
  </System>
  <EventData>
    <Data Name="Caption">DCEXEC.EXE - System Error</Data>
    <Data Name="Message">The program can't start because SqlTDiagN.dll is missing from your computer. Try reinstalling the program to fix this problem.</Data>
  </EventData>
</Event>
I have tried removing and reconfiguring the Data Collection twice, using the stored procedure msdb.dbo.sp_syscollector_cleanup_collector and removing the underlying database.
Both times with the same result.
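For anyone repeating that cleanup: the procedure takes no arguments, as far as I recall, and removes the Data Collector configuration from msdb (not the warehouse database itself), so a minimal invocation would be:

    USE msdb;
    GO
    EXEC dbo.sp_syscollector_cleanup_collector;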
Below is the basic information about the setup of the server:
Does anyone have any suggestions about what the problem could be and what the cure might be?
    Best regards,
    Michael Andy Poulsen

    I tried running a repair on the SQL Server installation.
    This solved the problem.
The only thing is that now I have to figure out the meaning of these last lines in the log of the repair:
    "The following warnings were encountered while configuring settings on your SQL Server.  These resources / settings were missing or invalid so default values were used in recreating the missing resources.  Please review to make sure they don’t require
    further customization for your applications:
    Service SID support has been enabled on the service.
    Service SID support has been enabled on the service."
    /Michael Andy Poulsen

  • Master data in SRM when backend system change

    Hi SRM Gurus,
Our scenario is ECC5 as the backend system integrated with SRM 4.0 (SSP scenario with Catalog CCM 2.0); the technical implementation scenario is extended classic.
Now our backend system will be changed from ECC5 to ECC6.
ECC6 is installed on a separate server for implementing changes in FI (separate Dev, QA and Prod environments).
The point is that when we integrate SRM 4.0 with ECC6, all master data (product categories, products and vendor master) needs to be replicated from ECC6 to SRM 4.0.
As the SRM system will remain the same, what will happen to the present master data in the production system?
As deletion is not suggested, what is the correct approach?
We can delete this master data in SRM Dev and QA.
Can we keep the master data pertaining to ECC5 as it is in SRM?
Any idea what type of impacts one can expect if the backend system changes from ECC5 to ECC6? Is there a recommended approach for this?
I have already referred to SAP Notes 418886 and 995771.
    Regards,
    Avinash

Hi
By changing to ECC 6.0 you would be adding a new backend system. Since the master data in SRM is linked to the respective backend system, this should not impact you. The old master data would still be there, but linked to the old backend system. You can let the data remain as it is if you want.
    Regards
    Sanjeev

  • Many-to-One user mapping to backend systems - possible?

My apologies if this simple question has been asked many times before; I couldn't find it if it has been.
I understand I can use user mapping in single sign-on to map user X in the portal to user A in a backend system (say SAP BI).
Can I map users X, Y, Z, ... to one backend user A?
What alternative do I have (to let a number of portal users run some BW reports on the backend system, for which we want to use only one BI user ID)?

An easier, more manageable way of doing this is to create a group and assign the group to a backend user. That will establish your many-to-one relationship.
We currently do this for our vendors, as they would like to see a single shopping cart in SRM.
Just be cautious about the manner in which you use the single user, as you can run into locked objects depending on how things are accessed. Another thing to consider is making the backend user a service user, as this also lets you change the password mapping policies; otherwise, in many cases you will need to remap the users every N days, depending on your system's security parameters.
Hope this helps.

Too many full garbage collections

I know that this is a problematic issue; I will try to explain the situation as best I understand it.
I'm doing performance tests on WebSphere with JDK 1.5. When I run load tests, the perm gen is always growing, and I get a full GC every minute.
We ran a profiler on the application without any major findings. What is the best way to diagnose the problem, and is there a way to confirm whether this is an application problem or a configuration problem?
Here are the details about the system.
I'm running my application with the following parameters:
    verboseModeGarbageCollection="false" verboseModeJNI="false" initialHeapSize="2560" maximumHeapSize="2560" runHProf="false" debugMode="false" debugArgs="" genericJvmArguments="-Djavax.management.builder.initial= -Dcom.sun.management.jmxremote -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+AggressiveHeap -XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:NewSize=256m -XX:MaxNewSize=512m"
This is the GC output for one full-GC cycle:
    [Full GC [PSYoungGen: 83659K->0K(349568K)] [PSOldGen: 2025999K->525203K(2097152K)] 2109659K->525203K(2446720K) [PSPermGen: 92303K->92303K(262144K)], 5.3172163 secs]
    [GC [PSYoungGen: 174848K->80229K(349568K)] 700051K->605432K(2446720K), 0.0264213 secs]
    [GC [PSYoungGen: 255077K->57913K(349568K)] 780280K->583116K(2446720K), 0.0245804 secs]
    [GC [PSYoungGen: 233145K->67890K(341760K)] 758348K->593093K(2438912K), 0.0235355 secs]
    [GC [PSYoungGen: 247346K->52121K(347456K)] 772549K->577324K(2444608K), 0.0205748 secs]
    [GC [PSYoungGen: 235033K->50073K(351104K)] 760236K->575276K(2448256K), 0.0264315 secs]
    [GC [PSYoungGen: 243801K->117165K(353792K)] 769004K->642368K(2450944K), 0.0386039 secs]
    [GC [PSYoungGen: 312749K->164773K(339968K)] 837952K->698063K(2437120K), 0.0523577 secs]
    [GC [PSYoungGen: 339621K->174332K(349568K)] 872911K->762494K(2446720K), 0.0435739 secs]
    [GC [PSYoungGen: 349180K->174434K(349568K)] 937342K->784479K(2446720K), 0.0408629 secs]
    [GC [PSYoungGen: 349282K->174047K(349568K)] 959327K->856683K(2446720K), 0.0450009 secs]
    [GC [PSYoungGen: 348895K->129687K(349568K)] 1031531K->894849K(2446720K), 0.0433891 secs]
    [GC [PSYoungGen: 304535K->45794K(349568K)] 1069697K->916887K(2446720K), 0.0393811 secs]
    [GC [PSYoungGen: 220642K->45056K(349568K)] 1091735K->918093K(2446720K), 0.0305810 secs]
    [GC [PSYoungGen: 218723K->77720K(349568K)] 1091761K->953079K(2446720K), 0.0315985 secs]
    [GC [PSYoungGen: 252565K->79205K(349568K)] 1127924K->965799K(2446720K), 0.0353033 secs]
    [GC [PSYoungGen: 254053K->48025K(349568K)] 1140647K->972788K(2446720K), 0.0321813 secs]
    [GC [PSYoungGen: 222873K->174332K(349568K)] 1147636K->1137561K(2446720K), 0.0434059 secs]
    [GC [PSYoungGen: 349180K->80690K(349568K)] 1312409K->1163076K(2446720K), 0.0387261 secs]
    [GC [PSYoungGen: 255538K->74290K(349568K)] 1337924K->1197422K(2446720K), 0.0504126 secs]
    [GC [PSYoungGen: 249138K->49487K(349568K)] 1372270K->1238304K(2446720K), 0.0372762 secs]
    [GC [PSYoungGen: 224092K->45794K(349568K)] 1412909K->1239300K(2446720K), 0.0303181 secs]
    [GC [PSYoungGen: 220642K->46080K(349568K)] 1414148K->1245756K(2446720K), 0.0331153 secs]
    [GC [PSYoungGen: 220716K->71320K(349568K)] 1420392K->1273683K(2446720K), 0.0373531 secs]
    [GC [PSYoungGen: 246168K->80690K(349568K)] 1448531K->1288785K(2446720K), 0.0474151 secs]
    [GC [PSYoungGen: 255538K->123748K(349568K)] 1463633K->1373405K(2446720K), 0.0392413 secs]
    [GC [PSYoungGen: 298596K->107415K(349568K)] 1548253K->1424226K(2446720K), 0.0426573 secs]
    [GC [PSYoungGen: 282263K->99531K(349568K)] 1599074K->1524829K(2446720K), 0.0455397 secs]
    [GC [PSYoungGen: 274379K->125766K(349568K)] 1699677K->1597998K(2446720K), 0.0402446 secs]
    [GC [PSYoungGen: 300614K->46080K(349568K)] 1772846K->1592451K(2446720K), 0.0365917 secs]
    [GC [PSYoungGen: 220928K->45056K(349568K)] 1767299K->1596906K(2446720K), 0.0314118 secs]
    [GC [PSYoungGen: 219851K->48749K(349568K)] 1771702K->1604723K(2446720K), 0.0332472 secs]
    [GC [PSYoungGen: 223597K->48581K(349568K)] 1779571K->1608827K(2446720K), 0.0358806 secs]
    [GC [PSYoungGen: 223429K->45056K(349568K)] 1783675K->1607545K(2446720K), 0.0336534 secs]
    [GC [PSYoungGen: 218518K->77720K(252608K)] 1781007K->1646087K(2349760K), 0.0403420 secs]
    [GC [PSYoungGen: 252568K->146019K(337280K)] 1820935K->1719290K(2434432K), 0.0427343 secs]
    [GC [PSYoungGen: 316579K->112694K(345280K)] 1889850K->1734228K(2442432K), 0.0407705 secs]
    [GC [PSYoungGen: 283254K->117302K(349568K)] 1904788K->1778546K(2446720K), 0.0445106 secs]
    [GC [PSYoungGen: 292150K->86629K(349568K)] 1953394K->1789210K(2446720K), 0.0426963 secs]
    [GC [PSYoungGen: 261477K->71320K(345472K)] 1964058K->1850372K(2442624K), 0.0404092 secs]
    [GC [PSYoungGen: 245571K->163600K(348928K)] 2024622K->1956942K(2446080K), 0.0451309 secs]
    [GC [PSYoungGen: 341264K->83659K(349568K)] 2134606K->1949767K(2446720K), 0.0412192 secs]
    [GC [PSYoungGen: 258507K->45056K(349568K)] 2124615K->1978267K(2446720K), 0.0896359 secs]
    [GC [PSYoungGen: 219904K->80690K(349568K)] 2153115K->2022121K(2446720K), 0.0360234 secs]
    [GC [PSYoungGen: 254827K->78744K(253824K)] 2196259K->2062178K(2350976K), 0.0394481 secs]
    [GC [PSYoungGen: 253784K->102961K(346688K)] 2237218K->2123243K(2443840K), 0.0386621 secs]
    [Full GC [PSYoungGen: 102961K->0K(346688K)] [PSOldGen: 2020282K->529529K(2097152K)] 2123243K->529529K(2443840K) [PSPermGen: 92303K->92303K(262144K)], 5.3534384 secs]

First: what do you think -XX:+AggressiveHeap does, and why are you using it? That's a "jumbo" flag that sets a whole bunch of individual flags, some of which might be things you want, but some are almost certainly not what you want. Find out what it does (which varies from release to release), and figure out which of the individual flags you want, if any.
    To be more constructive: I don't see anything wrong with your application. Every second or so you allocate about 128MB, and every second or so you promote about 25MB. Then a full collection happens and knocks your heap down from 2123MB to 529MB (roughly). The only problem I see is that a 5 second GC pause may not be acceptable. (Well, 20% GC overhead is high, too, so you aren't getting the throughput you want.)
    (I only have two samples, but if the trend from 525203K to 529529K in heap size after the full collections continues, you will eventually fill the heap with live objects and get an OutOfMemoryError. That might be a "leak" in your application. But I'm loathe to extrapolate from just two samples. Something to watch.)
    If the full collection pauses are a problem, you might want to switch to the low-pause collector (-XX:+UseConcMarkSweepGC) and embark on a tuning exercise for that. If you have spare CPU cycles, then the low-pause collections might be "free". Or if you are running on a multi-processor, you might try the parallel old-space collector (-XX:+UseParallelOldGC), since old-space collections usually scale pretty well. You are already using the parallel young space collector (-XX:+UseParallelGC).
    (You say you are using -XX:+PrintGCTimeStamps, but I don't see the time stamps in your log snippet.)
Another choice is to keep things from getting promoted into the old generation, if possible, as a way of avoiding the full collections. That will depend on the life cycle of your application data. What I see is that you promote data into the old generation, and then most of it is dead by the next full collection. If that data would die given more time in the young generation, then by increasing the size of the young generation (or at least the "survivor" spaces in it) you might be able to keep that stuff out of the old generation, and avoid having to collect the old generation. That's the theory. You might want to think about inverting the sizes of the generations: e.g., in a 2.5GB heap, have 1.5GB of young generation and 1GB of old generation, since you seem to have only about 500MB of persistent data (from the two samples I can see!).
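As a concrete (hypothetical, untested) sketch of that direction, mirroring the argument style quoted above: AggressiveHeap is dropped, the generations are inverted per the example proportions, and parallel old-space collection is enabled:

    initialHeapSize="2560" maximumHeapSize="2560" genericJvmArguments="-XX:PermSize=256m -XX:MaxPermSize=256m -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"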
    You might want to contact Sun support engineering: they are good at tuning the various collectors to meet customer needs.

Too many rows in a collection cause Siena to slow down rapidly

Is this a known issue? If I add more than 1,000 rows to a collection, Siena starts to slow down rapidly. I don't know whether I will have this problem in the published app or whether this is just a Siena issue. Thanks for any answers.

I believe, for example, that Microsoft has currently set the limit in Excel at 15,000 records.
Have you checked whether it's faster if you use LoadData/SaveData and create the collection from the file in the app space (see the sketch below)? That may speed things up too (maybe). My post in
http://social.technet.microsoft.com/Forums/en-US/1226fb54-bfb4-4eaa-8a7f-7bd0a270ea4d/using-loaddata-and-savedata-yet-again?forum=projectsiena#c5018405-f066-4422-bebb-a2d8acb4eb7c may give you some pointers if you need them.
You can also use the function explained in
http://social.technet.microsoft.com/Forums/en-US/3920f6b9-db62-4530-b891-4f9c1ac4832d/record-filtering-wiithin-gallery?forum=projectsiena to move in blocks of a few hundred per click if you want.
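For reference, the save/load pattern mentioned above looks roughly like this (a sketch from memory; the collection and file names are made up, and the exact signatures should be checked in your Siena build):

    SaveData(MyBigCollection, "MyBigCollectionFile")
    LoadData(MyBigCollection, "MyBigCollectionFile")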
    Regards
    StonyArc

Can mapping too many fields cause load on the system?

    Hi Experts,
I have a question about a mapping. I am mapping an IDoc to a web service. The web service has exactly the same structure as the IDoc, but not all of the fields are necessary. However, the owner of the web service is asking me to map all fields even though they are not required. So my question is: will these unwanted field mappings create an overload/burden on the XI system? I am getting the IDoc from an R/3 system, then I use a BPM to send it to the web service (to send the invoice to Oracle ERP), and the response I get back is used to update the status of the IDoc in R/3. Your thoughts on this, please.
    Thanks

    Hi,
Normally a standard 1:1 mapping (graphical, I guess) will definitely not cause any performance issues. Negative effects are sometimes experienced with very complex mappings involving many transformations, calculations, value-based actions, etc. Even XSLT and large messages can be a problem, because the whole message is read each time; in such cases Java mappings may help.
But in your case all should be fine ;o)
    Regards,
    Kai

  • System I/O and Too Many Archive Logs

    Hi all,
This is frustrating me. Our production database suddenly began to produce too many archived redo logs, again. This happened before: two months ago the database was producing too many archive logs, and at the same time we began to get async I/O errors. We consulted a DBA, who restarted the database server, telling us it was caused by the system (???).
After that restart the amount of archive logs decreased drastically. I had been deleting the logs by hand (350 GB database, 300 GB archive area), and afterwards the archive logs never exceeded 10% of the 300 GB archive area. Right now the logs are growing by 1% (3 GB) every 7-8 minutes, which is far too much.
I checked from Enterprise Manager: the System I/O graph is continuous, and the details show processes like ARC0, ARC1 and LGWR (log file sequential read and db file parallel write are the most active). Physical reads are also very inconsistent and can exceed 30,000 KB at times. The undo tablespace is nearly full all of the time, causing ORA-01555.
These symptoms all began today. The database is closed at 3:00 am for an offline backup and opened at 6:00 am every day.
Nothing has changed on the database (9.2.0.8), applications (11.5.10.2) or OS (AIX 5.3).
What is the reason for this senseless behaviour? Please help me.
    Thanks in advance.
    Regards.
    Burak

Hello Burak,
A high number of archive logs are being created because you may have massive redo generation on your database. Do you have an application that updates, deletes or inserts into any kind of table?
What is written in the alert.log file?
Do you have the undo tablespace with the guaranteed retention option, by the way?
Have you ever checked the log switch frequency map?
Please use the SQL below to determine the switch frequency:
SELECT * FROM (
  SELECT * FROM (
    SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
    FROM V$LOG_HISTORY
    WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
  ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;
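If the switch map confirms heavy redo, the following query against the standard v$ views (which should also work on 9.2) shows which sessions are generating it:

SELECT s.sid, s.username, s.program, st.value AS redo_bytes
  FROM v$sesstat  st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
  JOIN v$session  s  ON s.sid = st.sid
 WHERE sn.name = 'redo size'
 ORDER BY st.value DESC;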
    Ogan

  • Not able to do user Mapping with the backend System

    Hello all,
I am trying to map a user to a Windows-based KM system, but I am getting an error like:
    "There is a configuration issue which leads to problems when accessing user mapping data for the selected backend system 'KM_Test_System'. Usually, the reason is user mapping being configured for strong encryption, but the necessary additional files being missing. Check the security log file for further information and hints on how to solve the problem."
Can somebody tell me the possible reason for this and how to handle it? I don't have access to the portal server right now, so I am not able to see the security log.
    Thanks to all,
    Regards,
    Sarabjeet Singh.

Hi Sarabjeet,
I believe these links will help you understand and solve your problem:
    user mapping is not saved
    and
    http://help.sap.com/saphelp_nw04/helpdata/en/04/d246215f1d4f588d1d9c49391acb01/frameset.htm
    Hope this helps,
    Robert

  • Could not connect to backend system through portal via Web Dispatcher

    Web dispatcher redirecting problem
    Dear Experts,
I have implemented a scenario that comprises customizing developed in Portal EP7. What the customizing does is fetch a report from the backend system (ERP 6.0).
To make the portal reachable from the internet, I configured the SAP Web Dispatcher; you can see the contents of its profile below.
The problem: when I run the customizing in the portal, it needs to connect to the backend system (ECC 6.0) to get the data (the report). From that point on, the browser shows a blank page and cannot display the data from the backend system via the portal. When the portal tries to retrieve the data, the browser's status bar shows that it is connecting to the backend system using the backend's local network address, which is not known on the internet; therefore I get a blank page.
The question is: how do I configure the Web Dispatcher so that both the portal and the backend system are reachable from the internet?
Contents of the profile file of the Web Dispatcher:
SAPSYSTEMNAME = WDP
SAPGLOBALHOST = portald
SAPSYSTEM = 02
INSTANCE_NAME = W02
DIR_CT_RUN = $(DIR_EXE_ROOT)\$(OS_UNICODE)\NTAMD64
DIR_EXECUTABLE = $(DIR_CT_RUN)
# Accessibility of Message Server
rdisp/mshost = portald
ms/http_port = 8101
# Configuration for medium scenario
icm/max_conn = 500
icm/max_sockets = 1024
icm/req_queue_len = 500
icm/min_threads = 10
icm/max_threads = 50
mpi/total_size_MB = 80
# SAP Web Dispatcher Ports
#icm/server_port_0 = PROT=HTTP,PORT=81$$
# SAP Web Dispatcher Ports
icm/server_port_0 = PROT=HTTP,PORT=60000
icm/HTTP/redirect_0 = PREFIX=/, TO=/irj/index.html
icm/HTTP/redirect_0 = PORT=50000
# maximum number of concurrent connections to one server
wdisp/HTTP/max_pooled_con = 500
wdisp/HTTPS/max_pooled_con = 500
# Configuration for medium scenario
icm/max_conn = 500
icm/max_sockets = 1024
icm/req_queue_len = 500
icm/min_threads = 10
icm/max_threads = 50
mpi/total_size_MB = 80
    Regards,
    Ali Taner

    Hi,
To resolve this, you must have a registered FQDN for your backend system as well. When you call the report from the portal over the internet, the page should reference that FQDN of your backend system; DNS will then resolve it and you will get the expected page. That is the only way to resolve this issue.
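One direction worth investigating (a hypothetical sketch; the wdisp/system_<n> parameter family exists in newer Web Dispatcher releases, so check availability on yours) is to let the one dispatcher serve both systems and route by URL prefix, so the browser never needs the backend's internal address:

# hypothetical: route portal and backend traffic through the same dispatcher
wdisp/system_0 = SID=EP1, MSHOST=portald, MSPORT=8101, SRCURL=/irj
wdisp/system_1 = SID=EC6, MSHOST=<ecc-host>, MSPORT=<ecc-ms-http-port>, SRCURL=/sap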
    Thanks,
    Sachin Sable

  • Could not parse the file contents as a data set. There were too many variable names in the first line of the text file.

    Could not parse the file contents as a data set. There were too many variable names in the first line of the text file.

    What are the Variables settings, what is the text file’s content, …?

Please organize OS X Launchpad icons in the CS6 collection (there are too many Adobe icons)

Adobe applications create too many icons in Launchpad on OS X. Is there a way to automate organising them in a more elegant fashion? Launchpad gets full of red Adobe icons (two pages of them), especially if you have Master Collection installed. Please put them all in one folder or something.

    Thanks for your input here. We will look into it to see if there is an improvement we can make in the future.
    Pattie

  • Overloading a DATE function with TIMESTAMP to avoid "too many declarations"

    CREATE OR REPLACE PACKAGE util
    AS
      FUNCTION yn (bool IN BOOLEAN)
        RETURN CHAR;
      FUNCTION is_same(a varchar2, b varchar2)
        RETURN BOOLEAN;
      FUNCTION is_same(a date, b date)
        RETURN BOOLEAN;
      /* Oracle's documentation says that you cannot overload subprograms
       * that have the same type family for the arguments.  But,
       * apparently timestamp and date are in different type families,
       * even though Oracle's documentation says they are in the same one.
       * If we don't create a specific overloaded function for timestamp,
       * and for timestamp with time zone, we get "too many declarations
   * of is_same match" when we try to call is_same for timestamps. */
      FUNCTION is_same(a timestamp, b timestamp)
        RETURN BOOLEAN;
      FUNCTION is_same(a timestamp with time zone, b timestamp with time zone)
        RETURN BOOLEAN;
      /* These two do indeed cause problems, although there are no errors when we compile the package.  Why no errors here? */
      FUNCTION is_same(a integer, b integer) return boolean;
      FUNCTION is_same(a real, b real) return boolean;
END util;
/
    CREATE OR REPLACE PACKAGE BODY util
    AS
  /* NAME: yn
     PURPOSE: pass in a boolean, get back a Y or N */
      FUNCTION yn (bool IN BOOLEAN)
        RETURN CHAR
      IS
      BEGIN
        IF bool
        THEN
          RETURN 'Y';
        END IF;
        RETURN 'N';
      END yn;
  /* NAME: is_same
     PURPOSE: pass in two values, get back a boolean indicating whether they are
              the same.  Two nulls = true with this function. */
      FUNCTION is_same(a in varchar2, b in varchar2)
        RETURN BOOLEAN
      IS
        bool boolean := false;
      BEGIN
        IF a IS NULL and b IS NULL THEN bool := true;
        -- explicitly set this to false if exactly one arg is null
        ELSIF a is NULL or b IS NULL then bool := false;
        ELSE bool := a = b;
        END IF;
        RETURN bool;
      END is_same;
      FUNCTION is_same(a in date, b in date)
        RETURN BOOLEAN
      IS
        bool boolean := false;
      BEGIN
        IF a IS NULL and b IS NULL THEN bool := true;
        -- explicitly set this to false if exactly one arg is null
        ELSIF a is NULL or b IS NULL then bool := false;
        ELSE bool := a = b;
        END IF;
        RETURN bool;
      END is_same;
      FUNCTION is_same(a in timestamp, b in timestamp)
        RETURN BOOLEAN
      IS
        bool boolean := false;
      BEGIN
        IF a IS NULL and b IS NULL THEN bool := true;
        -- explicitly set this to false if exactly one arg is null
        ELSIF a is NULL or b IS NULL then bool := false;
        ELSE bool := a = b;
        END IF;
        RETURN bool;
      END is_same;
      FUNCTION is_same(a in timestamp with time zone, b in timestamp with time zone)
        RETURN BOOLEAN
      IS
        bool boolean := false;
      BEGIN
        IF a IS NULL and b IS NULL THEN bool := true;
        -- explicitly set this to false if exactly one arg is null
        ELSIF a is NULL or b IS NULL then bool := false;
        ELSE bool := a = b;
        END IF;
        RETURN bool;
      END is_same;
      /* Don't bother to fully implement these two, as they'll just cause errors at run time anyway */
      FUNCTION is_same(a integer, b integer) return boolean is begin return false; end;
      FUNCTION is_same(a real, b real) return boolean is begin return false; end;
END util;
/
    declare
    d1 date := timestamp '2011-02-15 13:14:15';
    d2 date;
    t timestamp := timestamp '2011-02-15 13:14:15';
    t2 timestamp;
    a varchar2(10);
    n real := 1;
    n2 real;
    begin
    dbms_output.put_line('dates');
    dbms_output.put_line(util.yn(util.is_same(d2,d2) ));
    dbms_output.put_line(util.yn(util.is_same(d1,d2) ));
    dbms_output.put_line('timestamps'); -- why don't these throw exception?
    dbms_output.put_line(util.yn(util.is_same(t2,t2) ));
    dbms_output.put_line(util.yn(util.is_same(t,t2) ));
    dbms_output.put_line('varchars');
    dbms_output.put_line(util.yn(util.is_same(a,a)));
    dbms_output.put_line(util.yn(util.is_same(a,'a')));
    dbms_output.put_line('numbers');
    -- dbms_output.put_line(util.yn(util.is_same(n,n2))); -- this would throw an exception
    end;
/
Originally, I had just one function, with VARCHAR2 arguments. This failed to work properly because when dates were passed in, the automatic conversion to VARCHAR2 was dropping the time portion. So, I added a 2nd function with DATE arguments. Then I started getting the "too many declarations of is_same exist" error when passing TIMESTAMPs. This made no sense to me, so even though Oracle's documentation says you cannot do it, I created a 3rd version of the function to handle TIMESTAMPs explicitly. Surprisingly, it works fine. But then I noticed it didn't work with TIMESTAMP WITH TIME ZONEs. Hence the fourth version of the function. Oracle's docs say that if your arguments are of the same type family, you cannot create an overloaded function, but as the example above shows, this is very wrong.
Lastly, just for grins, I created the two number functions, one with INTEGER, the other with REAL, and even these are allowed - they compile. But then at run time, it fails. I'm really confused.
    Here is the apparently incorrect Oracle documentation on the matter: http://docs.oracle.com/cd/B12037_01/appdev.101/b10807/08_subs.htm (see overloading subprogram names), and here are the various types and their families: http://docs.oracle.com/cd/E11882_01/appdev.112/e17126/predefined.htm.

    >
    So, I added a 2nd function with DATE arguments. Then I started getting "too many declarations of is_same exist" error when passing TIMESTAMPs. This made no sense to me
    >
That is because when you pass a TIMESTAMP, Oracle cannot determine whether to implicitly convert it to VARCHAR2 and use your first function, or implicitly convert it to DATE and use your second function. Hence the 'too many declarations' error.
    >
    , so even though Oracle's documentation says you cannot do it, I created a 3rd version of the function, to handle TIMESTAMPS explicitly. Surprisingly, it works fine. But then I noticed it didn't work with TIMESTAMP with TIME ZONEs.
    >
    Possibly because of another 'too many declarations' error? Because now there would be THREE possible implicit conversions that could be done.
    >
    Hence, the fourth version of the function. Oracle's docs say that if your arguments are of the same type family, you cannot create an overloaded function, but as the example above shows, this is very wrong.
    >
    I think the documentation, for the 'date' family, is wrong as you suggest. For INTEGER and REAL the issue is that those are ANSI data types and are really the same Oracle datatype; they are more like 'aliases' than different datatypes.
    See the SQL Language doc
    >
    ANSI, DB2, and SQL/DS Datatypes
    SQL statements that create tables and clusters can also use ANSI datatypes and datatypes from the IBM products SQL/DS and DB2. Oracle recognizes the ANSI or IBM datatype name that differs from the Oracle Database datatype name. It converts the datatype to the equivalent Oracle datatype, records the Oracle datatype as the name of the column datatype, and stores the column data in the Oracle datatype based on the conversions shown in the tables that follow.
ANSI datatype                 Oracle datatype
INTEGER, INT, SMALLINT        NUMBER(38)
FLOAT (Note b)                FLOAT(126)
DOUBLE PRECISION (Note c)     FLOAT(126)
REAL (Note d)                 FLOAT(63)
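As a small follow-up sketch: rather than adding ever more overloads, an explicit CAST at the call site also removes the ambiguity, because after the cast only one declaration can match:

declare
  t timestamp := systimestamp;
  d date      := sysdate;
begin
  -- cast the TIMESTAMP down to DATE so only the DATE overload matches
  dbms_output.put_line(util.yn(util.is_same(cast(t as date), d)));
end;
/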
