DI job causing high levels of I/O on database server

We have a DI job that is loading a SQL Server 2005 database.  When the facts are loaded it's causing a high level of I/O on the database server, which slows the DI job down.  No more than 5 facts are loaded concurrently.  The fact dataflows all have a SQL transform to run the select query against the DB, a few query transforms to do lookups to get dimension keys, and all do inserts to the target.  The DBA says there are too many DB connections open and DI is not closing them.  My thinking was that DI would manage the open connections for lookups, etc., and would close them properly when the dataflow is complete.
Any thoughts on what else would cause high levels of DB I/O?
Additional Info:
- Run the DI job, with source and target tables in SQL Server, and it takes 5 hours.
- Run the same DI job again, on the same data set, and it takes 12+ hours.  This run will have high levels of DB I/O.
- But if SQL Server is stopped and restarted, the job will again take 5 hours the first time it runs.

There are a lot of areas of a DI job that can be tuned for performance, but given that your job runs fine after the database is restarted, it sounds like a problem with the database server and not the Data Integrator job.
There are a lot of resources out there for dealing with SQL Server disk I/O bottlenecks.  As a minimum first step, all of them will recommend putting your .mdf and .ldf files on separate drives and using RAID 10 for the .mdf file.
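As for the open-connections claim, you can verify it rather than take it on faith.  Below is a minimal JDBC sketch (the server name, credentials, and driver setup are placeholders, not from this thread) that counts user sessions per client program using the sys.dm_exec_sessions DMV available in SQL Server 2005; run it before, during, and after the DI job:

    import java.sql.*;

    public class SessionCount {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; needs Microsoft's SQL Server JDBC driver on the classpath.
            String url = "jdbc:sqlserver://dbserver:1433;databaseName=master;user=monitor;password=secret";
            String sql = "SELECT program_name, COUNT(*) AS sessions "
                       + "FROM sys.dm_exec_sessions "
                       + "WHERE is_user_process = 1 "
                       + "GROUP BY program_name ORDER BY sessions DESC";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // DI's connections show up under whatever client name the job server registers.
                    System.out.println(rs.getString("program_name") + ": " + rs.getInt("sessions"));
                }
            }
        }
    }

If the session count keeps climbing from run to run, the DBA's theory holds; if it stays flat, the slowdown is more likely on the database side, which would fit the observation that a SQL Server restart restores the 5-hour runtime.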

Similar Messages

  • Can a long running batch job causing deadlock bring server performance down

    Hi
    I have a customer with a long-running batch job (approx 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs, and the database server is crawling. Looking at the alert.log shows some deadlocks.
    The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
    Thus, I'm just wondering whether deadlocks could cause the whole server to crawl; even connecting to the database using Toad is slow, as is doing ls -lrt.
    Thanks
    Rgds
    Ung

    Kok Aik wrote:
    According to the documentation, a complex deadlock can make the job appear hung and affect throughput, but it didn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would have used up the CPU.
    I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
    Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
    You can have a long-running update hit a row which was changed by another user after the update started - which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion on the topic).
    Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on to suddenly find themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
    And so on...
    Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it to its original default value, 0. I couldn't figure out what the lgpr_size is - can you explain?
    Thanks
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan

  • Need Help to identify the server hardware high level details

    Hi All,
    I have browsed around and I couldn't find the high-level hardware details.
    Can anyone help me get the below high-level hardware details of the Sun Solaris server machine?
    1. CPU processor name.
    2. CPU processor speed.
    3. Total RAM size.
    4. Overall hard disk space.
    Thanks in Advance,
    Senthur

    Hello chandandascyr07. I don't think Linksys will be able to provide you the GPIO data since it's basically proprietary company information.

  • High-Level LabView Control System Engineer Job Opening - Seattle

    High-Level LabView Control System Engineer Job Opening - Seattle
    Seattle Safety is looking for a qualified individual to fill an opening for Senior Software Controls Engineer.  Seattle Safety designs, manufactures, and installs advanced crash test sled systems that are used in automotive and aeronautical industries.  The duties of the Controls Engineer include:
    Maintain existing control software code base, written in LabVIEW (including the Real-Time Module).
    Improve existing software based on requirements and requests from customers and colleagues.
    Troubleshoot and repair any functional software bugs that may arise.
    Continuously investigate opportunities for system improvement through new or alternative hardware or software approaches.
    Support installations of crash test equipment at on-site locations worldwide.
    Provide technical support for team members locally and abroad in subject matters concerning performance, installation, and maintenance of software and data acquisition hardware.
    Maintain professional relationships with suppliers and vendors in order to keep up with industry developments.
    Furthermore, the ideal candidate would possess the following skills:
    Intermediate-to-advanced knowledge of LabVIEW.
    Ability to analyze empirical data against theoretical predictions to enhance and improve mathematical model of system.
    Familiarity with data acquisition concepts and hardware.
    Ability to troubleshoot electrical and electronic systems at the module and equipment level.
    Discipline and organization with respect to software maintenance and version management.  Experience with source configuration management tools a plus (CVS, ClearCase, Perforce, etc.)
    Experience with sophisticated high-speed feedback control systems
    General skills in areas such as frequency domain analysis, systems analysis, digital filtering, and both linear and non-linear signal processing.
    A BSEE, BSME, or BS in Physics may be a good fit, but candidates are not limited to these areas.
    Ability to work both alone and with colleagues to solve problems and to weigh the merits of differing approaches.
    Pay is commensurate with skills and qualifications of the applicant.
    Contact:
    Seattle Safety
    Tom Wittmann
    (253)395-4321
    1222 6th Av N
    Kent, WA  98032
    [email protected]
    Attachments:
    ServoSled Brochure.pdf (1215 KB)

    Dear Sir / Madam,
    I am an experienced engineering professional skilled in post-silicon validation automation using LabVIEW; power measurements; jitter measurement & analysis; audio characterization; silicon validation test cases; multi-channel data acquisition and triggering using NI DAQ cards; control systems; serial communications using VISA and the serial I/O interface; Code Native Interfaces and Call Library functions to interface with third-party and custom DLLs; ATMEL and PIC microcontroller programming; temperature controllers such as Honeywell, ESPEC-641 and TestEquity 115; hand-held terminal programming to drive servo motors; and C/C++/VB programming for developing embedded applications.
    I also have good experience with the Windows API, protocol implementations, and ARM11 & ARM7TDMI on-chip programming using register maps and pinout specs, using C/C++ with Metrowerks CodeWarrior and MULTI-ICE for ARM Debugger.
    I am looking for an L1/H1 job.
    Thank you for your time and consideration.
    Please find an attachment of my resume in MS-Word format.
    Sincerely yours,
    K. Sowjanya, B.Tech

  • How to Schedule SAP background job at OS Level

    Hi All,
    Can Anyone tell me how to Schedule SAP background job at OS Level (unix).
    Regards,
    Anil

    Hi Anil,
    I don't know your requirements; anyway, it's possible to set up your SAP job to start after an event, and you can then get the event triggered from the operating system in the following way:
    - Log into your operating system with the SIDadm user ID (at the operating-system level) and go to the directory /usr/sap/SID/SYS/exe/run
    - Run the SAPEVT executable as follows:
    sapevt YOUR_EVENT -t pf=/usr/sap/SID/SYS/profile/DEV_DVEBMGS00_server001 nr=01
    This will raise the event, and cause the job scheduled within SAP to execute.
    You can periodically execute this job with crontab.
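    For example (reusing the sapevt call above; the schedule itself is just an illustration), a crontab entry for the SIDadm user that raises the event every day at 1:00 AM would look like this:
        0 1 * * * /usr/sap/SID/SYS/exe/run/sapevt YOUR_EVENT -t pf=/usr/sap/SID/SYS/profile/DEV_DVEBMGS00_server001 nr=01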
    Thanks,
    Federico Biavati

  • Where can I find various high level examples of workflows being used

    I am about to start a project with TCS 3.5 and have been participating in the Adobe webinars to help learn components and specific techniques, but what I am lacking is an understanding of various workflows I can model my project after or take bits from. Why start with FrameMaker in this workflow versus RoboHelp or even Word? Questions like this, I think, come from experience with the process, and I am thinking that what I am getting myself into is a chess game with all these pieces; I don't want to paint myself into a corner by traveling down one route. I have seen several workflow graphics, but they are too generic and do not contain enough information to really understand the decision-making process one must go through on various projects.
    Can we have a series of webinars made, all with the underlining theme of defining a working process or workflow, by having guests describe how they have or are using this suite in real life on their own projects? One that might include a graphic showing the routes taken through the suite with reasons why?
    My project hopes to make a single-source internal site that will tie together various 3D portable industrial coordinate metrology systems (hardware and software). It would be used as a dispersal site for help, communications between users and SMEs, OEM information, QA requirements, established processes, scripting snippet downloads, statistics, and training (including SOJT). Portable industrial metrology has 8 different software packages that are used and right now about 8 different instruments. These include laser trackers and radars, articulated arms, scanners, and structured white and blue light, to name a few. The software packages include Spatial Analyzer, Veriserf, CompIT, eMscon, and AXYZ, to name a few there as well. I want to be able to participate in and add content to an internal SharePoint site, push content to users for stand-alone workstations, publish ePub, capture knowledge leaving the company through attrition, develop easy graphic-rich job aid sheets, and aid in evaluations of emergent software and hardware. I would also like to leave the option open to use the finished product as a Rosetta Stone-like translator between the software packages: doing this in one package is the equivalent of doing this in these other packages, for example.

    PDF is definitely a format I want to include, to collaborate with other divisions and SMEs for one reason, but also for its portability and the ease of including interactive 3D target models within it. I plan on being able to provide individual PDFs that are very specific in their topics and to also use them to disperse user guides, cheat sheets or job aids... something the user may want to laminate on their own and keep with them for reference, printed out. Discussion in these sheets would be drastically reduced to only the elements, relying heavily on bullet points or steps, useful graphs, charts and tables... and of course illustrative images. I am thinking that these should be downloadable buttons to print on each topic section, not in a general appendix or such. They would hopefully be limited to one page, double-sided 8x10.
    The cheat sheet would have a simplistic flow chart of how or where this specific topic fits in the bigger picture,
    The basic steps,
    Illustrations, equipment, setup
    Software settings for various situations in a table or chart,
    Typical result graph to judge with,
    Applicable QA, FAA regulation settings or concerns,
    Troubleshooting table,
    Topic SME contact info
    On the back, a screen shot infographic of software process
    The trouble here is that I have read that FM sometimes has a problem successfully transferring highly structured or formatted material to RoboHelp. Does this then mean that I would take it from FM straight to PDF?
    Our OEM material is very high-level stuff... basically for engineers and not shop-floor users... but that is not to say they don't have some good material that could be useful. Our internal content is spread out across many different divisions and continents, with various ways of saying the same thing. This leads QA to interpret the information differently depending on where the systems are put to work. We also have FAA requirements that the user needs to be reminded of.
    Our company is starting to see an exodus of the most knowledgeable users through retirement. Capturing the knowledge and soft-skill packages they have developed working here for 20-30 years is something I am really struggling with. I have only come up with two ideas so far:
    Internal User Web based Forum
    Interviews (some SMEs do not want to make the effort of transferring knowledge by participating in anything if it requires an effort they don't see as benefiting themselves), to get video, audio or transcription records

  • Phase out settings at a higher level such as brand or major customer

    Have any of you ever set up phase-out assignments at a higher level than product and had them work correctly?  For example, we want to phase out a brand for a major customer.  In other words, a customer is dropping a brand and we don't want statistical forecast generated for that brand/customer combination any longer.  I am able to set up the fields in the phase-out lifecycle settings for product, brand, and major account, but when I enter the brand and major account I am still getting forecast generated.  It appears to stop for some products within the brand but not all.  Another example: if a customer quits ordering from us, I want to set up the major customer to phase out so no forecast is generated.
    If you have done this successfully, please let me know.  Or if you would handle these situations in a manner other than phase-out, please let me know.  We can do historical adjustments each period, but that is a lot of maintenance to do after each period before statistical forecast is generated.
    Thanks
    Steve

    Hi Stephen,
    Lifecycle planning works only at the detail level (each CVC); the option of aggregate planning is helpful if you want to phase in or out a certain CVC when you are forecasting at the aggregate level.
    One option is to have all products which fall under that brand and customer in the "profile assignment for life cycle" section; you can maintain a file and then automate the upload process into the assignment.
    or
    you can try to use the copy functionality in realignment (/SAPAPO/RLGCOPY), where you maintain the copy factor as NIL; when the stat fcst is generated, you can use this as the next step to zero it out - but you would need to maintain them manually.
    or
    the easiest and safest way would be to create a selection for those combinations and not include them in the planning job for stat fcst.
    or
    you can build a customised program to access the PA, PB, and data view and input the selection to zero out the stat fcst KF for that particular selection after the stat fcst run. Here you would need to check whether the disaggregated values are good enough.
    hope it helps.

  • Errors in the high-level relational engine on Schedule Refresh Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e

    Connections are all valid and work when I set up the refresh, but when the scheduled refresh occurs I get this error:
    Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The Data Transfer Service has encountered a fatal error when performing the data upload. The remote server returned
    an error: (400) Bad Request. The remote server returned an error: (400) Bad Request. Transfer client has encountered a fatal error when performing the data transfer. The remote server returned an error: (400) Bad Request. The remote server returned an error:
    (400) Bad Request.;transfer service job status is invalid Response status code does not indicate success: 400 (Bad Request).. The current operation was cancelled because another operation in the transaction failed.
    It is trying to refresh 3 simple tables with less than 9,000 rows each.
    Also, I'd like to add that the refresh works fine directly from Excel as well...
    Another fact just in: it seems to work on one out of the 3 tables sometimes, so the first table gets a success in the log, but sometimes it fails (it succeeded twice and failed once with the above error).  The second table never succeeds and gets the error above.
    The 3rd table never even gets attempted.
    Am I running into some sort of timeout perhaps?
    The refresh history entry shows:
    Status: Failure
    Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e
    Started: 04/01/2015 at 01:50 AM; Ended: 04/01/2015 at 01:53 AM; Duration: 00:03:14
    Power Query - Sendout_Records: Not tried
    Power Query - Positions: Errors in the high-level relational engine. (Same "(400) Bad Request" exception as quoted above.)
    Power Query - Position_Activities: Success.

    This is not because of the number of rows; it's the execution time. The query takes more than 7 minutes to execute, and it seems this fails the refresh process.
    Thank You

  • 2008 R2/ Win7 Offline Files Sync causing High Load on server

    Hi Everyone,
    I have recently been investigating extremely high CPU usage from the System process on my company's main file cluster.
    We managed to track SRV2.sys threads causing high CPU load within the System process, but were having issues identifying why this was the case.
    As per Microsoft's direction via a support call, we have installed the latest SRV2.sys hotfixes, but this does not appear to have alleviated the issue we are experiencing. We have also added more CPU and memory to both nodes, which has not helped either.
    We have since managed to create a system dump, which is being sent to MS Support for analysis.
    I have noticed the following that appears to happen on our cluster:
    Whenever our CAD/Design department runs certain functions within their apps on a Windows 7 client (apps include MicroStation, Revit, AutoCAD, etc.) we see a massive spike and flatline of the System process.
    We found several users with Windows 7 clients that have configured Offline Files to sync an entire network volume (some volumes are 2TB plus, so they would never fit on a user's computer anyway; I was quite shocked when I found this). How we spotted this was through Resource Monitor: the System process was trawling through all the folders and files in a given volume (it was reading every single folder). Now, while this was the System process, we could identify the user by using the Open Files view in Server Manager's Share and Storage Management tool.
    I have done a fair bit of research and found that a lot of CAD/drawing applications on the market have issues with SMB2 (srv2.sys). When reviewing the products that we use, I noticed that a lot of them actually recommend disabling SMB2 and reverting to SMB1 on the file server and/or the clients.
    I have gone down the path of disabling SMB2 on all Windows 7 clients that have these CAD applications installed to assist with lowering the load (our other option is to shift the CAD volumes off our main file cluster to further isolate these performance issues we have been experiencing). We will be testing this again tomorrow to confirm that the issue is resolved when mass amounts of CAD users access data on these volumes via their CAD application.
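    For reference, the commonly documented way to disable SMB2 on a Windows 7 client (per Microsoft's SMB enable/disable guidance; treat this as a sketch and test it first) is to reconfigure the workstation services from an elevated prompt and reboot:
        sc.exe config lanmanworkstation depend= bowser/mrxsmb10/nsi
        sc.exe config mrxsmb20 start= disabled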
    We only noticed the issue with Offline Files today, when trying to sync an ENTIRE volume. My questions are:
    Should Offline Files syncs really cause this much load on a file server?
    Would the size of the volume the sync is trying to complete create this additional load within the System process?
    What is considered industry "best practice" regarding Offline Files setup for volumes which could have 1000+ users attached? (My personal opinion is that Offline Files should only sync users' "Personal/Home" folders, as the users themselves have a 1-to-1 relationship with that data.)
    Is there an easier way to identify which users have Offline Files enabled and actually in use? (From my understanding, Sync Center and Offline Files are enabled by default, but you obviously have to add the folders/drives you wish to sync.)
    If I disable the ability for Offline Files on the volumes, what will the user experience be when/if they try to sync their offline files config after this has been disabled on the volume?
    Hoping for some guidance regarding this setup with Offline Files.
    Thanks in Advance!
    Simon

    Hi Everyone,
    Just thought I would give an update on this.
    While we're still deploying http://support.microsoft.com/kb/2733363/en-us to the remainder of our Windows 7 SP1 fleet, according to some network traces and XPerf data that were sent to MS Support, it looks as though the issue with Offline Files is now resolved.
    However, we are still having some issues with high CPU in the System process in particular. Upon review of the traces, they found a lot of ABE-related traffic, leading them to believe that the issue may also be caused by ABE access on our file cluster.
    We already have the required hotfix for ABE on 2008 R2 installed (please see http://support.microsoft.com/kb/2732618/en-us), although we have it set to a value that MS believes may be too high. Our current value is set to "5", as that is the lowest level at which any type of permission is set (i.e. where the lowest level of inheritance is broken).
    I think I will mark this as resolved regarding the Offline Files issue (as it did resolve the problem with the Offline Files)...
    Fingers crossed we can identify the rest of the high load within the System Process!!!

  • QuickTime MPEG-2 Component can't play back High Profile High Level content

    Is the QuickTime MPEG-2 Playback Component really unable to play back HP HL content?
    I'm encoding high-bitrate media content for media servers with my Mac Pro and have used VLC so far to play back the encoded files.
    VLC, however, doesn't loop flawlessly, so I decided to try the QuickTime MPEG-2 Playback Component.
    To my disappointment, it did not play back the High Profile High Level MPEG-2 content at all.
    It tried, but no image was visible and only a few digital bleeps could be heard from the soundtrack.
    Why call it a playback component if it can't do the job?
    Yours,
    Matti Snellman


  • High-level/scripting languages learning thread

    Hi all,
    In recent weeks I have looked into many of the high-level/scripting languages, all of them easy enough to get into quickly. My problem, though, is not actually learning them, but that I don't have much use for them right now. Sure, from time to time I need a little script for something (and sometimes I then translate that script into lots of languages just for the heck of it, like here), but that doesn't amount to much. On the other hand, I'm neither in a job regarding IT/programming nor do I study anything with respect to programming, and I am also not interested in more programming as in compiled languages, systems programming or things like that (at the very least, not yet). So I'm doing this just for fun and learning (two of my life goals). I am aware of, for example, Project Euler; however, I'm not mathematically interested enough for that.
    So, the purpose of this thread are two things.
    a) I'm asking for suggestions for interesting things I could do with high-level/scripting languages; maybe someone knows of something Project Euler-like but for more mundane things, not maths.
    b) To give this thread another purpose and not make it only about me, people who have a problem writing a script for something can ask for help here. I know of the other thread (the long one, "commandline utilites/scripts"), but that one seems to be more of the sort where someone posts a script he/she uses and then maybe someone posts an answer to it. In this thread, people should be able to ask for help while creating a script, or even ask "where to start". This could serve both the people with the problem and the people wanting to learn more about some language but not finding a way to apply the learning.
    Ogion

    a) I'm asking for suggestions for interesting things I could do with high-level/scripting languages; maybe someone knows of something Project Euler-like but for more mundane things, not maths.
    To me, this sounds like the Python Challenge: http://www.pythonchallenge.com/
    Also, if you're not interested in math, maybe you might still find yourself engaged by something like natural language processing, games, or simulations? I personally find the "Natural Language Toolkit" for Python to be a lot of fun.

  • Volume higher level in Vista than OSX?

    Hi everyone, this isn't a major issue, but I'm really wondering why my integrated speakers go to a much higher level with Windows Vista than with OS X. It's the same hardware, so it should have the same sound level on both OSes... and it's not a slight difference: for example, to get the same level of sound that 1 bar gives me in Vista (I'm referring to the volume pop-up icon from Apple), it takes me about 3-4 in OS X... Did anyone else notice that?
    And if I'm not alone, could Apple explain this situation, 'cause I'd love to have the same level of sound in both OSes.

    Resetting the PRAM can sometimes restore the volume of your Mac to its default settings.
    see Resetting your Mac's PRAM and NVRAM

  • RFC: Proposing a high level iteration facility based on Collections

    I am requesting comments on an experimental package I developed providing a high-level facility for iteration over Java 2 Collections.
    You can browse the javadoc at http://www.cacs.louisiana.edu/~cxg9789/javautils/ and the code is available for download from http://www.cacs.louisiana.edu/~cxg9789.
    Basically, the package provides an interface Task that has a single method job(), which is called for every element in a given collection. There are some static methods for using this kind of scheme to iterate over collections. An example would be:
        Iteration.loop(collection, new Task() {
            public void job(Object o) {
                // do something on o here
            }
        });
    Now you may wonder: what is the use of going into this much trouble when I can just get an iterator and do the same thing? Well, creating a class that represents the whole iteration opens a number of new possibilities. You can now have methods and variables exclusive to the specific iteration and reuse it. You can even subclass it for variants. This proved very useful in my application, especially when I developed the StringTask class that is available in the same package.
    Nevertheless, you can see for yourself that we've got rid of the iterator and the condition checking that appear in conventional loop constructs. For details you can look at Iteration at http://www.cacs.louisiana.edu/~cxg9789/javautils/edu/ull/cgunay/utils/Iteration.html and StringTask at http://www.cacs.louisiana.edu/~cxg9789/javautils/edu/ull/cgunay/utils/StringTask.html
    I was wondering if you Java developers would find such a scheme useful. Thanks for your interest.

    Heh... now I need to remember back...
    No I don't, the internet comes to the rescue :-) I was wrong: apply is the simple "function apply" function; the list function of interest was (is) called map.
    http://www.cse.unsw.edu.au/~paull/cs1011/hof.html
    This was very helpful. Now I know what you're saying. Actually this may very well be the hidden influence that led to this system. The map operation also exists in Lisp and Scheme. I remember really liking it when I first learned about it.
    My approach isn't exactly the same as any of map, fold or filter. But I think I can create subclasses of Task which act the same way as these. I will try to do this in the near future.
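    As a rough illustration of that idea, a map-style Task could apply a transformation to each element and collect the results. A minimal, self-contained sketch (MapTask, transform() and the loop() stand-in are hypothetical names, not part of the actual package):

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.Iterator;
        import java.util.List;

        public class MapTaskSketch {
            // The package's Task interface, as described above.
            interface Task { void job(Object o); }

            // Stand-in for Iteration.loop(collection, task).
            static void loop(Collection c, Task t) {
                for (Iterator i = c.iterator(); i.hasNext(); ) {
                    t.job(i.next());
                }
            }

            // A map-style Task: applies transform() to every element
            // and collects the results, mimicking Lisp/Scheme map.
            static abstract class MapTask implements Task {
                private final List results = new ArrayList();
                protected abstract Object transform(Object o);
                public void job(Object o) { results.add(transform(o)); }
                public List getResults() { return results; }
            }

            public static void main(String[] args) {
                List words = new ArrayList();
                words.add("map"); words.add("fold"); words.add("filter");
                MapTask upper = new MapTask() {
                    protected Object transform(Object o) {
                        return ((String) o).toUpperCase();
                    }
                };
                loop(words, upper);
                System.out.println(upper.getResults()); // prints [MAP, FOLD, FILTER]
            }
        }

    A fold or filter variant would differ only in what job() accumulates.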
    Just as a comment on the design, I would have maneuvered the list of getSomeResult() into an object that knows how to render itself as a String. That may well be using your Iteration feature over the internal list, or another feature as the developer sees fit.
    This point is well taken. Actually my program uses this kind of approach in many places; however, I wanted to give a less confusing example here.
    Your iteration is a perfectly valid pattern for iterating over a collection, and one that could be applied in many places in my code. I'm unsure I will migrate to it, because it tends towards a large number of small objects to perform its function... which can sometimes be a simplification, but in this case can obscure multi-threading issues... spreading a loop over a collection across multiple classes makes it less obvious what is happening to the contents, or what synchronization is required, what locks have been acquired, or what concurrent modifications are possible.
    On the other hand, it might be more secure to have operations on collections in one place, as an Iteration class, to keep them together. Maybe you already mentioned that in your message. I can understand your concern about using the system, though.
    To keep this information in one place, you could use an anonymous inner class, but then you have lost the reusability and succinctness of the iteration, which are two of its largest benefits (being a high-level function using centralised, tested code being the third, and probably the largest).
    I started using small inner classes very extensively. Maybe this can alleviate the problem, since they're not anonymous and can be reused. However, there is still a problem using (subclassing) them from outside of the class. I found a way to do this, too. It only works in special situations, though.
    Assume you have an inner class:
        public class Outer {
            class Inner { ... }
        }
    You can extend this inner class if you have another class extending Outer:
        class NewOuter extends Outer {
            class Inner extends Outer.Inner { ... }
        }
    Thinking about it, your iteration is at a higher level than foreach... and would benefit from using it if it were ever supported by the JVM. They are slightly orthogonal approaches to looping abstraction, foreach being syntactic and your pattern being heuristic.
    We're in tune here; I'd be interested to use the foreach operator and the Iterable interface as primitives in my system if they are ever provided in Java. Currently they would not offer me anything extra, since the Collection interface provides me with what I need.
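    For what it's worth, once Java gets the for-each syntax, the two approaches would compose directly: for-each could serve as the primitive inside loop() while Task keeps supplying the reusable behaviour. A minimal sketch under that assumption (class and method names are hypothetical; requires Java 5+):

        import java.util.Arrays;
        import java.util.List;

        public class ForEachLoop {
            // The package's Task interface, as described above.
            interface Task { void job(Object o); }

            // Task-based loop built on the for-each primitive: for-each supplies
            // the syntax, the Task object supplies the reusable behaviour.
            static void loop(Iterable<?> c, Task t) {
                for (Object o : c) {
                    t.job(o);
                }
            }

            public static void main(String[] args) {
                List<String> items = Arrays.asList("high", "level", "iteration");
                loop(items, new Task() {
                    public void job(Object o) { System.out.println(o); }
                });
            }
        }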

  • Tracing down to job or program level from a short dump or system log in APO

    Hi Gurus,
    Can anyone please guide me on how to trace down to the job or program level from ST22 and SM21 logs?
    I have SQL errors and lock entries in the APO system, and I want to find which program or job is writing them.
    Thanks in advance,
    Sreerama.

    Hi,
    Thanks for the reply.
    I have checked in the ST22 dump the same way you suggested, but it takes me to include "LARFCF06", which is trying to delete the ARFCSTATE table,
    and I have this error message:
    Database error text........: "[1205] Transaction (Process ID 291
    deadlocked on lock resources with another process and has been
    deadlock victim. Rerun the transaction."
    Internal call code.........: "[RSQL/DELE/ARFCRSTATE ]"
    Please check the entries in the system log (Transaction SM21).
    User and Transaction
        Client.............. 100
        User................ "APORFC"
        Language Key........ "E"
        Transaction......... " "
        Transactions ID..... "002FE8DE2A71F1B182E80019BB345F90"
        Program............. "SAPLARFC"
        Screen.............. "SAPMSSY1 3004"
        Screen Line......... 2
        Information on caller of Remote Function Call (RFC):
        System.............. "P48"
        Database Release.... 700
        Kernel Release...... 700
        Connection Type..... 3 (2=R/2, 3=ABAP System, E=Ext., R=Reg. Ext.)
        Call Type........... "synchron and non-transactional (emode 0, imode 0)"
        Inbound TID.........." "
        Inbound Queue Name..." "
        Outbound TID........." "
        Outbound Queue Name.." "
        Client.............. 100
        User................ "APORFC"
        Transaction......... " "
        Call Program........."SAPLERFC"
        Function Module..... "ARFC_DEST_CONFIRM"
        Call Destination.... "NONE"
        Source Server....... "debosap172_P48_21"
        Source IP Address... "10.132.184.172"
        Additional information on RFC logon:
        Trusted Relationship " "
        Logon Return Code... 0
        Trusted Return Code. 0
        Note: For releases < 4.0, information on the RFC caller are often
        only partially available.
    Information on where terminated
        Termination occurred in the ABAP program "SAPLARFC" - in
         "DELETE_ARFC_ORPHANS_WO_COMMIT".
        The main program was "SAPMSSY1 ".
        In the source code you have the termination point in line 508
        of the (Include) program "LARFCF06".
        The termination is caused because exception "CX_SY_OPEN_SQL_DB" occurred in
        procedure "DELETE_ARFC_ORPHANS_WO_COMMIT" "(FORM)", but it was neither handled
         locally nor declared
        in the RAISING clause of its signature.
        The procedure is in program "SAPLARFC "; its source code begins in line
        489 of the (Include program "LARFCF06 ".
    Now I want to find out how this deadlock is created and which two programs or jobs are causing it.
    Program "SAPLARFC" is not an executable program, and even when I try the where-used list I am not able to find the executable program.
    So I need to find which two programs or jobs are causing this lock.
    Thanks in Advance,
    Sreerama.

  • HDMI Audio not working on Q190 (along with all higher level Audio Formats)

    Help, I have been given the run-around by support. I cannot get the HDMI audio to work with my Pioneer surround sound; only the Intel display audio shows in Control Panel (Win 8 x64), plus the Realtek S/PDIF port, and that is not capable of supporting 7.1 sound, bitstreaming, or DTS, Dolby HD, etc. Tech support appears incapable of fixing the issue and wanted to send me to software support and pay. I have only had the machine for 4 days and it has never supported higher-level sound.
    Every other device I have (had or currently have) connected to the receiver works just fine. I have to figure this out or return the machine; the audio is the most important aspect for me. Besides, when you advertise 7.1 support, the machine you sell should be able to do it.

    Hey guys,
    I have had this Q190 with the Celeron CPU since last week. I am using XBMC Frodo, and the HDMI is connected to my Onkyo TX-NR809 AVR and from the Onkyo to the TV. The sound is 7.1 with PLIIz, and it works fine. I think it may be a driver problem, because the Realtek audio in the Q190 works fine with the preinstalled Win 8. Realtek is kind of bad with drivers: I lost my wifi after upgrading to Win 8.1, and after a few days with no wifi I found out that the driver was bad; yes, it was a Realtek wifi driver, but posted by Lenovo for Win 8.1.
    I have another friend who also just bought the Q190, and he reported no audio problem, so I think it is just a matter of troubleshooting the driver and configuration. I do love the form factor of the Q190.
