Overhead: issue

Hi experts,
Please advise. I have configured the calculation of overhead on production orders through OPL1.
1) Should I also maintain the overhead structure through KSAZ?
2) How will material overhead (in the costing sheet) be calculated on the material base?
3) How will manufacturing overhead (in the costing sheet) be calculated on the production base?
4) In CO01 (production order) I have entered an "overhead key" in the control data. How does it connect
   with the costing sheet when I run KGI2?
Please advise.
Regards,
Samar

Hi,
1) YES
2) You have to assign the material to a cost element and, optionally, to an origin (this field is maintained in the material master). The origin enables you to split the base, at the penalty of too many cost elements. In the extreme case you can define one origin per material, but in most cases an origin classifies a material group. Alternatively, you can use the user exit EXIT_SAPLKASC_001.
3) Use the user exit EXIT_SAPLKASC_001. I use it for one of my customers, e.g. for energy expenses.
4) The overhead key is linked to an overhead group (OKZ2), which is entered in the material master (Costing 1 view).
The system uses the overhead group to determine the overhead rate.
Benni
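To tie points 2-4 together, a worked example with illustrative figures (not taken from your configuration): suppose the costing sheet defines a 10% material overhead on the raw-material base and a 5% manufacturing overhead on the conversion-cost base, and the overhead key in the order points to an overhead group whose rate lines carry those percentages. If the order has collected 10,000 in raw material costs and 4,000 in conversion costs, running KGI2 debits the order with 1,000 material overhead and 200 manufacturing overhead, and credits the cost centers defined in the costing sheet's credit keys.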

Similar Messages

  • Cost Sheet and Overheads issue

    Hi,
    In my costing sheet I have to assign selling overheads.
    For that I am collecting all expenses in one selling overheads cost center, so where do I have to assign this cost center in the costing sheet? Up to COGM our company does not want to assign overheads; after assigning the selling overheads, COGS should come. My question is where and when we have to enter our selling overheads cost center in the costing sheet. Please guide me with an example as well.

    Question 1
    I have to assign selling overheads. For that, I am collecting all expenses in one selling overheads cost center. So, where do I have to assign this cost center in the costing sheet?
    You specify the cost center to which the credit should go under Define Credit. This is under Costing Sheet > Components > Define Credit. Later you assign the credit under the relevant costing sheet.
    Question 2
    Up to COGM our company doesn't want to assign overheads. After assigning the selling overheads, COGS should come. My question is where and when we have to enter our selling overheads cost center in the costing sheet. Please guide me with an example as well.
    You create the cost component structure (transaction OKTZ), where you specify the cost components with attributes, for example:
    10 for Raw materials
    20 for Labour
    30 for Mfg O'head
    40 for Selling OH.
    Select the component associated with Selling OH (say 40 here) and click on the magnifier icon (Goto > Details), then select the Sales and Administration Costs radio button. This is under the filter criteria for cost component views on itemization.
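    As an illustration with made-up figures: if the costing sheet applies a 2% selling overhead on a COGM base of 100,000, the overhead calculation debits the cost object with 2,000 under the selling overhead cost element and credits the selling overheads cost center named in the credit key, so COGS comes out as 100,000 COGM plus 2,000 selling overhead.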

  • ZFS disk overhead issues

    Hi,
    Been experimenting with zfs and I need some guidance. Here is my setup:
    4 whole disks used for ZFS raidz1
    c1d0 110GB usable
    c1d1 230GB usable
    c2d0 230GB usable
    c3d0 230GB usable
    # zpool list
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    pool 446G 221G 225G 49% ONLINE -
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    pool 166G 163G 48.6K /pool
    pool/vault 166G 163G 166G -
    Here are my questions:
    1. Why is the pool size only 446G instead of 570G?
    2. Why does zfs list only show 329GB?
    Very confused.. Please help.
    thx in advance.

    If you use a mix of disk sizes in a raidz, ZFS treats them all as if they were the size of the smallest disk.
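    A rough worked calculation under that rule: with the smallest disk at about 110 GB, all four disks count as roughly 110 GB each, so zpool list reports about 4 x 110 ≈ 440-446 G of raw pool space (zpool list includes the parity space), while zfs list reports what is left after one disk's worth of raidz1 parity, about 3 x 110 ≈ 330 G, which matches the 166 G used plus 163 G available you see.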

  • Hard drive performance and data throughput

    I am using my MacBook Pro for work primarily, and part of that entails creating/restoring images of other Macs. I've had the best luck with SuperDuper; however, the process is still VERY slow. For instance, at this moment, with no other applications open other than S.D. and Firefox, the copy speed is under 5 MB/s from my MBP to an iMac via FireWire.
    I am looking for suggestions to increase the performance/IO in the hopes to speed up the process. When purchasing this system the 7200rpm drive was not an option (15") which is unfortunate. I realize that both hard drives in the operation will cause the variable in speed but I want the sending drive as fast as possible.
    My thoughts right now are to purchase a 7200rpm external drive to store backup images and also send from. This would cut out any possible IO on the drive that my mac is performing to run the operating system. Another thought was to upgrade my mac to a 7200rpm drive and use the current 5400rpm drive as the storage for images...in the hopes that it would still provide an increase in restoration speed since it wouldn't be running OSX on it.
    Any thoughts or ideas? Experiences? My MBP has the 5400rpm I believe and 2GB of ram.
    Thanks

    I'll try and explain a bit better. I'm not restoring the same image to different types of Macs. I create images of OTHER Macs, using my MacBook Pro to perform the process as well as store the backup image.
    Thanks for the clarification. I do that too, but when I do I use my Mac Pro to clone a Mac via Target Disk Mode to an external FireWire 800 drive.
    It helps, but it's a USB 2 enclosure with a somewhat older hard drive that is only 30 GB. I am looking at purchasing a FireWire 800 external drive, but I will see how this other unit works for now since we already have it.
    Part of your throughput problem may be the overhead issues with USB 2. FireWire uses its own chipset so is more independent of the CPU, and FireWire can sustain high-speed transfers at a higher level. USB is CPU-bound and is more vulnerable to CPU demands from other apps or background processes or other USB devices. So even though USB 2 has a higher theoretical peak (480Mbps), FireWire (400Mbps) actually does better in the real world.
    About USB 2 vs. FireWire 400 performance
    I'm not sure if FireWire 800 would help because your slowest drive in the chain may not be fast enough to take advantage.
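    For a rough sense of scale (simple unit conversion, not a measurement): FireWire 400's nominal 400 Mbps is about 50 MB/s and USB 2's 480 Mbps about 60 MB/s, so the ~5 MB/s (~40 Mbps) copy speed is around a tenth of either bus's theoretical limit, which suggests the bottleneck is the drives or protocol overhead rather than the nominal bus bandwidth.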

  • Web Service from RFC function module

    Hi all,
    I'm searching for a way to create web services from RFC-enabled function modules. I know there is a wizard, but it's not suitable for my problem:
    I have around 30 function modules (the number will rise in the near future) that I want to enable as web services; most of them share the same structures representing the underlying data model. When one structure is modified (e.g. a new sub-structure is added) I have to recreate all the web services, which is quite uncomfortable.
    In addition, using the wizard results in one WSDL file for each web service operation, which leads to administration overhead when using these web services from a Java frontend.
    So I'm searching for a "bulk creation". I also know the wizard can be used for a function group, but I can't put all my RFC-enabled function modules into one function group, as that would lead to confusion in the package.
    Does anyone know how web services can be created programmatically? If I had a function module that creates a web service, I could write a function module that creates all the web services I need.
    Thank you in anticipation!
    Florian

    I'm not sure I understand the 're-create' part... how are you re-creating the web service? If you modify the interface of a function-based web service, you use the 'modify operations' functionality (context menu, right-click) to regenerate the web service interface; this works for a function-group-based web service as well and hits all of the methods at once. As for the Java overhead on function group services, stating the obvious, maybe you just need to lower the number of functions in your groups. We use .Net-developed applications with multi-function web services and don't have any overhead issues.

  • Why are still photos blurry when I play movie?

    I am making a video slideshow using Adobe Premiere Elements 9. Most of the video is made up of still photos. When I play back the video, the still photos are blurry. What should I do to prevent this? Thanks!

    I read your article Bill, and that just seems like a massive amount of work for the average individual who wants to put together a movie. Is all of that really necessary?
    With PS, or PSE, and Scaling Actions, one can do batch processing of entire folders of images in moments. That linked article gives setup tips for doing batch scaling very easily, and the Action, plus Automate to Batch, will work on entire folders. My workstation will automatically process ~ 500 4000x3000 pixel images to 720x480 in about 2 mins.
    There are two benefits when Scaling stills in an image processing program, prior to Import into an NLE (Non Linear Editor) program:
    The processing overhead is greatly reduced
    The quality is greatly improved, as the Scaling algorithms in an image processing program, are much better *
    If one wants to let the NLE program do the processing, and one has many large images (especially at still camera resolutions), then they WILL need a very powerful computer.
    Good luck,
    Hunt
    * PrPro, as of CS5, has improved the quality of Scaling, but the overhead issue still exists.
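    To put the overhead point into numbers using the figures above: a 4000x3000 still is 12,000,000 pixels versus 345,600 for a 720x480 frame, roughly 35 times as many, so every unscaled still forces the editor to resample about 35 times more data than the project frame size needs.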

  • CRUD insert problems.

    Hello all,
    I was wondering about DB operations in JSF pages, so I took a look at the Single Page CRUD example. What struck me was the need for a two-step insertion: first issuing a SELECT to find the biggest ID of the primary key, and after that inserting the element with that biggest ID + 1. I see at least 2 problems with this approach:
    1. Concurrency issues. What happens if 2 users issue an insert operation over the same table at the same time? There is the possibility of getting the same ID to insert, and the first one could insert, but the second one would fail even if its request is logically correct (validated & converted). I see three solutions to the insert problem:
    a. lock on the database (if it's possible).
    b. using a synchronized block in the application bean to get the ID and insert.
    c. using DB specific constructs (e.g. MySQL's AUTO_INCREMENT)
    2. Overhead issues. Why do in two steps an operation that should be just an insert? The previous approaches a. and b. do not solve the overhead problem, because we still have two steps in the insertion; we only synchronize them.
    I was wondering which is the best practice for production-quality web applications. Personally, because I've picked MySQL as the DB, I've used AUTO_INCREMENT, but the immediate, huge, and obvious drawback is giving up the DataProvider's ability to switch the storage medium easily.

    I'm not sure if I entirely understood your questions here.
    - Concurrency problem.
    The database-bound data provider underneath uses CachedRowSet, which uses a SyncProvider to take care of the concurrency problem. If the default RIOptimisticProvider is not enough, it is possible to register another, more sophisticated SyncProvider.
    You can read about it here.
    http://java.sun.com/j2se/1.5.0/docs/api/javax/sql/rowset/CachedRowSet.html
    - Overhead issue
    I believe it is possible to let the DB auto-increment the primary key field and leave it out of the insertion from the data provider.
    - Winston
    http://blogs.sun.com/roller/page/winston?catname=Creator
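    A minimal JDBC sketch of that auto-increment approach (the table, column, and connection details are made up for illustration, and this is plain JDBC rather than the Creator data-provider API):
    import java.sql.*;

    public class AutoIncrementInsert {
        public static void main(String[] args) throws SQLException {
            // Hypothetical MySQL database with a table "items" whose primary key "id" is AUTO_INCREMENT
            String url = "jdbc:mysql://localhost:3306/testdb";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO items (name) VALUES (?)",
                         Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "example row");
                ps.executeUpdate();                              // one round trip, no prior SELECT MAX(id)
                try (ResultSet keys = ps.getGeneratedKeys()) {   // key assigned by the database
                    if (keys.next()) {
                        System.out.println("New id: " + keys.getLong(1));
                    }
                }
            }
        }
    }
    Because the database assigns the key, the SELECT-then-INSERT race and its extra round trip both disappear.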

  • Workflows and OWB 10.1

    We have run into a brick wall here and need some advice on how to proceed:
    Our DW has more than 50 fact and dimension tables; for each table we have a mapping that loads the table from an external table, and a process flow (workflow) that:
    - looks out for the flat file generated by the extractor process. The extractor processes are written in Cobol and are executed outside OWB.
    - renames the flat file and validates its contents
    - executes the mapping
    - performs error handling (moves the file, send e-mail etc.)
    All the process flows are independent of each other and can be executed at any time.
    One of the requirements we have is to support data loads every N minutes, where N is in the range of 2 minutes to 1 month. This means each process flow can be executed very frequently, and this is where we run into problems.
    The issue we are seeing is that the OWB RTP service hangs every few hours, which causes the session that executes the process flow to hang. In the log for the RTP service (10.1.0.2) we see RPE-01003 (Infrastructure error caused by Closed Statement - next). In the Workflow Monitor, we see the activities in the COMPLETE state and no error. This can happen for any activity of the process flow.
    We have been working with Oracle Support to resolve this and have tried various things including upgrading to OWB 10.1.0.4 without any success.
    Lately, the message we have received from Oracle support is that what we are doing here is unusual i.e. process flows are being executed too frequently and the number of process flows are too high. As per them, the OWB-Workflow interaction has an overhead that causes this error.
    We disagree, as we get this error even when the jobs are executed at a low frequency, though obviously the occurrence is then quite infrequent. We think the error happens regardless of the frequency, so Oracle should treat it as an issue. Also, if it is an overhead issue, we need to know what kind of resource limit we are running into and how we can avoid it.
    What we would like to know is:
    - Are we doing something unusual here? If yes, how can we change our process flow design to avoid this?
    - Has anyone else run into this issue? If yes, how did you workaround it?
    - Can we depend on OWB RTP and Workflow to be fail-proof? If not, then how can we recover from the failures?
    Any relevant information would be helpful.
    Thanks,
    Prashant.

    I had this issue and Oracle Support wasn't able to produce any stable solution. I tried stopping and restarting the OWB service, which worked quite well. You do not need to bounce the DB for this.
    Approximately how many process flows do you have? I guess you can reduce the staging-area (external to Oracle) flows to 1 or 2, with the fact and dimension flows based on the business rules. For what it's worth, I had over 40 process flows for a DW with OWB 9.2.0.2.8.
    Check v$resource_limit.
    If you want, you can try the following: from the Workflow repository (owf_mgr by default) run exec wf_purge.total,
    then run wfrmitt.sql, which will remove whatever you pass in (almost clearing out the Workflow repository if you pass everything).

  • PPBM CS4 Eye opener!!!!

    My CPU is older, but nonetheless, something seems off...
    I've uploaded my results to Bill Gehrke.
    Bill seems to think there's something amiss.
    I've taken Bill's and Harm's advice: tried every different scenario there is, changed scratch discs, cache locations, etc., but still can't seem to get faster throughput.
    When going through AME, the CPU bounces back and forth between 100% and 40%, but only averages around 55%. If that's normal, please let me know.
    Bill was kind enough to let me know that something was wrong with my .AVI output times. I agree; it shouldn't be 40 seconds, but rather 10 at most.
    Is it possible that the "Use Preview file" button isn't working correctly in AME? Can anybody verify this, because something's definitely wrong.
    I'm not officially submitting this, because there must be something i'm overlooking.
    I've got 3 drives, and another 2 on raid 0....
    Please help?!?!
    Here's my results.
    Pijetro,   Personal or Computer ID
    Tyan kw8e,  Computer Manufacturer
    HP wx9300,   Computer Model
    240.8,  secs Total Benchmark Time
    45.5,  secs AVI  Encoding Time
    127.3,  secs MPEG Elapsed Time
    68,  secs Rendering Time
    AMD,    CPU Manufacturer
    OPTERON 270,    CPU Model
    2,    GHz CPU speed
    2,    Number of CPU chips
    4,    Total Number of Cores
    8,    GB RAM
    4.2.1,    APP Version PPBM4 DV
    Win 7 64,    OSVersion
    SATA,    OS Disk Interface
    80,    GB OS Disk Capacity
    10,000,    OS Disk Speed
    SATA,    Project Disk Interface
    500,    GB Project Disk Capacity
    7200,    Project Disk Speed
    SATA,    Preview Disk Interface
    500x2 R0,    GB Preview Disk Capacity
    7200,    Preview Disk Speed
    SATA,    Output Disk Interface
    500,    GB Output Disk Capacity
    7200,    Output Disk Speed
    ATI RADEON X1800,    Graphics Board
    ,    Comment

    Thanks Harm...
    1. Yes
    2. Yes
    3. Yes
    4. Yes  (thanks for that tip)..
    I'd like to make a few comments and observations.
    As far as encodes and render times, i can't control that. I realize my older Opterons are old donkeys.
    But nonetheless, i've used third-party MPEG encoders in the past, and i've absolutely smoked the speeds that AME is getting.
    As far as the .AVI's being saved out from "Preview files", i find it totally unacceptable that 40 seconds is average..Any software i've used in the past (including CS2.0 and VirtualDub), allowed for a direct stream copy of pre-rendered footage. It's simply a matter of file muxing, and shouldn't have anything to do with HDD bottlenecking. It's a matter of file transfer.
    Harm, i've gone as far as i can go without stripping my system to the bone. All results are within 2% of each other, regardless of disk management.
    The tweaks you mention are great for an athlete looking to shave another kilo of body fat, but i'm working with a 1-tonne elephant that doesn't want to move!!!
    And while i'm on the subject of disk locations, scratch, cache files etc..
    I've noticed lots of suggestions about disk speeds and such. Unless i'm working with uncompressed HD material, how can a DV clip that has only a 25 Mbps output have any negative effect on disk-to-disk output? I'm working with HDV, DV and Matrox .AVI formats, and my data rates don't come anywhere near bottlenecking HDD output speeds. I'm of the opinion that as long as data is not being read/written on the same drive, file transferring shouldn't be an issue...
    I'm starting to develop the opinion that, unless there's a setting i've totally overlooked, AME has some huge overhead issues, especially with imported sequences.
    I'll fight the good fight, but i'm starting to think it's over...

  • Reaching out for Enterprise Security Help

    My current environment is a medium-size hospital with multiple campuses. We have a number of different types of devices: laptops, CoWs (Computers on Wheels), 7921s, BlackBerrys. Currently the majority of my clients are running WPA/WPA2-PSK. Personally, I'm sick to death of PSK. It's easy and has a small footprint, but managing keys is a major pain in the butt. At any one time I have an average of 500 clients connected to my WLCs (4.2.205). I've been trying to run a project on moving the devices to an EAP scenario. Laptops work fine in EAP-TLS, as do BlackBerrys, but as everyone knows, EAP-TLS has some authentication overhead. Here's my problem: the CoWs. The CoW is simply a mini-PC put into a specialized cart that the nurses pull from room to room for bedside meds and such. With EAP-TLS testing I'm having a lot of issues with the authentication taking too long and the user getting kicked out of their app, Meditech. Our version of Meditech is basically a crap telnet application, and if it doesn't get a response quickly it'll throw you to the desktop. Also, although I know EAP-TLS has some overhead, I'm disappointed in its roaming ability and how slow it is. As I see it, the users I have testing EAP-TLS on laptops and BlackBerrys are not truly mobile; they typically don't attempt to use their device while on the move, versus the CoW. Here are a few things I've run into in trying to figure out a security solution, and hopefully you guys can help me out and suggest some things I haven't thought of:
    EAP-TLS - Obvious overhead issues as stated above. Is anyone running this in a similar environment? How do you deal with it?
    PEAP - Relies on a strong user/pass, which does not work in our world. The nurses log into the CoW with a generic username/password that pretty much everyone is aware of. Although Windows itself is locked WAY down, you're still on the network if you have access to this user/pass.
    EAP-FAST - As I understand it, with EAP-FAST and MSCHAPv2, there's a PAC for each user. If the user logs in more than once from different locations, I suspect this would be a problem. Not to mention I'm not sure how the manageability of usernames would work. I looked at using the certificate on the machine to do the authentication and setting EAP-FAST to require this for authentication, and it works fine for my laptop and the IntelPro/Set Wireless utility, but on the CoWs, not so. The CoWs have an Atheros AR5006x chip, and the Atheros Client Utility will only allow you to select a personal cert, not a machine certificate. Does anyone know of a client utility that will allow me to do this without spending $$$$, or an Atheros client that will allow me to do this?
    How is everyone else providing an enterprise solution with manageability and stability?

    Extensible Authentication Protocol (EAP) is an IETF RFC that stipulates that an authentication protocol must be decoupled from the transport protocol used to carry it. This allows the EAP protocol to be carried by transport protocols such as 802.1X, UDP, or RADIUS without having to make changes to the authentication protocol itself.
    • PEAP MSCHAPv2 - Protected EAP MSCHAPv2. Uses a Transport Layer Security (TLS) tunnel (the IETF standard of an SSL) to protect an encapsulated MSCHAPv2 exchange between the WLAN client and the authentication server.
    • PEAP GTC - Protected EAP Generic Token Card (GTC). Uses a TLS tunnel to protect a generic token card exchange; for example, a one-time password or LDAP authentication.
    • EAP-FAST - EAP Flexible Authentication via Secured Tunnel. Uses a tunnel similar to that used in PEAP, but does not require the use of Public Key Infrastructure (PKI).
    • EAP-TLS - EAP Transport Layer Security. Uses PKI to authenticate both the WLAN network and the WLAN client, requiring both a client certificate and an authentication server certificate.
    http://www.cisco.com/application/pdf/en/us/guest/netsol/ns386/c649/ccmigration_09186a0080871da5.pdf

  • Configuring a HA service on Sun Cluster 2.2

    I am a Product Manager working with customers using Oracle software on Sun Cluster 2.2. My question is: how can I configure a service to bind to a logical/virtual address, so as to make it available at the same address after failover? Are there cluster-specific steps that I need to take in order to achieve this?

    In the OPS environment there is no use of the HA-Oracle agent; the same instance of the database is running on the different nodes. The failover is on the client side, because all nodes access the same shared-disk database. The tnsnames.ora file is modified so that if a transaction fails, the client will try the other nodes. The OPS environment also uses Oracle's UNIX Distributed Lock Manager (UDLM), so there are some overhead issues.
    Let me know if this is the info you needed,
    Heath
    [email protected]

  • AP invoices, assets and Project Costing

    Hello:
    We have a business environment where some invoices constitute a single asset. As a result, we would like to avoid the overhead of creating and processing these single-invoice assets through Projects (the capital projects screen).
    We would like to have these invoices associated with a Project, Task, Expenditure Type, and Org, but route them into Mass Additions instead of routing them through Projects.
    I believe that the concurrent process called "Mass Additions Create Program" automatically routes any invoice that has been associated with a project directly into Projects and does not allow the option of having these routed to Mass Additions.
    My question is this: is there any alternative here? Project Costing is forcing us to route these single-invoice assets through Projects. Are there any other options or ideas, or do we just have to live with this shortcoming?
    Many thanks,
    Robert
    null

    I guess the only shortcoming is if you are trying to run these one-line assets into a Capital Project rather than an Indirect or Contract Project, for in the case of the latter your intended behaviour is certainly achievable without having to resort to a workaround.
    However, if you're trying to send these one-line assets from AP into PA to a CAPITAL Project, as well as from AP into FA, you're stuck. The system will not allow you to push to FA from AP if the line is also going to a Capital Project and the line is distributed to an Asset Clearing Account ("CIP Account").
    The manual is a little confusing on this point, but I am certain that for the Mass Additions Create program to pick up a line from AP and move it to FA, the line must have a CIP account designation if it is distributed to a Project. Thus, if the Project is also a Capital Project, it can't be done because of the restriction outlined above.
    There was a little bolt-on developed a few years back in the TeleCom vertical at Oracle which automated the creation of the Assets from within the Capital Project as telecom companies experienced similar overhead issues with having to "manually" define and move assets out of a Capital Project into FA.
    Hope this helps

  • Help - convert 60p sequence to 30p

    I've built an entire sequence with a lot of music and motion keyframes in 720p60. The original footage was shot in 60p. I decided to try to move it to a 30p sequence, because I believe 60 is causing lots of output overhead issues. I cut and pasted everything from 60 to 30, but all of the timing got corrupted. Is there a better way to migrate and still retain all of the clips to edit?

    What are these "output overhead issues?"
    You can't just change frame rates like that... the only way to get a good, proper frame rate change would be to take all the master footage into Compressor and convert it there, then re-edit the sequence with the new clips. The way you are doing it is rife with issues.
    But what are the overhead issues, perhaps we can solve that.
    Shane

  • Fortigate on Windows Server Problem

    Someone suggested I post this question here...hopefully, this is the right place, and someone can give me an answer.
    I was just given a new Mac Pro with OS X 10.6.8. Previously I was working on a Windows machine.
    Switching over went well...I have access to the Exchange Server and am able to get all of my email. I can also access the Windows server volumes and have no problems with that.
    My problem is that I'm hitting a wall (literally) with our company's Fortigate firewall. On my Windows machine, I had full access to the internet. But with my Mac, for some reason, Fortigate is not seeing me as logged in as a Fortigate user, so I'm blocked. Unfortunately, our "IT Manager" is someone who hates the Mac platform, was unwilling to approve the purchase of a Mac, and then basically said that I was on my own as far as technical support. We do have an outside agency that handles all of our tech support (our "IT Manager" is basically just a liaison for the off-site support company), but the off-site support company also seems to not know a whole lot about Macs.
    When I asked the off site company about possible solutions to my problem, our "IT Manager" got involved, did some ranting about the incompatibility of Macs, and I was basically told:
    A Mac, as it does not perform NTLM authentication to the domain by default, will not be able to authenticate and show up as a user in the FortiGate.  It will continue to fall into the ‘guest’ account category.
    There are three ways around this:
    1. Add the Mac to the domain and make it log in every time (this will present other management overhead issues regarding file share access, printer access, etc., which can certainly be taken care of; it will just be about 1-2 hours of on-site work to test and configure completely).
    2. Give the Mac a ‘sticky’ DHCP address so that it uses its own UTM policy on the firewall (the easy way out).
    3. Create an approved corporate IT policy and force her to use a PC by disallowing non-Windows-based PCs on the network, as they present an unmanageable security risk (Macs are usually supported on Windows-based networks in corporate environments, but only on a best-effort ‘you-choose-it-so-you-support-it’ basis).
    I then asked the off site support company whether or not I can configure my Mac to use a proxy server ip address and solve the problem that way, and I was told that the firewall "was not set up that way."
    I know that Fortigate is compatible with Macs, and it just seems like this is being made out to be more difficult than it has to be. I'm ok with doing my own tech support for my Mac, but the IT people I'm dealing with just seem to either refuse to give me any useful information, or just seem to not know what they're doing (which is typical). I hate to keep going to our IT people with this problem, because every time I do, I (and my boss) get an earful about how Macs are incompatible and we should not have allowed the Mac on the network in the first place.
    So, I guess my question is: Is there a way that I can set this up myself on my end so that the Fortigate Firewall will see me as a registered user? Or are the solutions listed by our support service the only options for me?

    Maybe it's a little late, but perhaps it helps someone.
    I had the same problem, and the cause was that the host name was too long.
    I put in a shorter name and the error disappeared.

  • Get the number of concurrent users for a WD application

    Hi all,
    I need to implement a method which counts the number of concurrent users for a Web Dynpro application.
    Say I have a portal with X applications, is there API that can tell me how many users are working on specific application?
    I know that this line of code
    IWDClientUser currentUsers[] = WDClientUser.getClientUsers();
    will get me the number of users that are working on ALL Web Dynpro applications. Is there a way of getting it for specific application?
    I thought of using a text file or a small DB table to set/get the number of users, but I think it would cause overhead issues.
    Any ideas how to implement it?
    Thanks,
    Omri

    Hi Omri,
    Try using the WDServerState.getActualApplications() method. It returns an array of the currently running applications. Compare your application name with the array elements and keep a counter for the matching elements.
    String app[] = WDServerState.getActualApplications();
    for (int i = 0; i < app.length; i++) {
        view.getComponent().getMessageManager().reportSuccess(app[i]);
    }
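    A hedged sketch of the counting idea (placed in the same spot as the snippet above, e.g. inside a view method; it assumes getActualApplications() returns one name entry per running application instance, and the application name below is a placeholder):
    String targetApp = "MyWDApplication";                    // hypothetical application name to match
    String[] apps = WDServerState.getActualApplications();   // names of currently running applications
    int count = 0;
    for (int i = 0; i < apps.length; i++) {
        if (targetApp.equals(apps[i])) {                     // count only instances of our application
            count++;
        }
    }
    view.getComponent().getMessageManager().reportSuccess(
        "Concurrent instances of " + targetApp + ": " + count);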
