Opinions / Expert Knowledge

I have implemented a working prototype of what I call a "Last one out turn off the light" design.
Here is the design in a nutshell:
Client creates a stateful session bean.
Stateful session bean looks for an entity bean with the primary key of the client's IP address. If such an entity bean doesn't exist, the session bean creates it. The session bean increments a counter on the entity bean.
When the client closes down it removes the session bean, which decrements the entity bean's counter and then tries to remove the entity bean.
The entity bean checks its counter. If the counter is greater than 0 then the entity does not allow itself to be destroyed.
If the client crashes the session bean is still able to decrement the entity bean's counter and then try to remove it. This was no small feat considering that in EJB 1.1 the container neglects to notify the session bean when the container decides to remove it. No one has explained the reason for this 'feature' of 1.1 to me.
Here's the question. Some people on my team argue that we shouldn't do this because there is no conversation going on between the client and the session bean and therefore it does not fit the purpose of a stateful session bean. To me this argument is like saying I shouldn't use a screwdriver to pry the lid off of a paint can because screwdrivers are made for turning screws, not opening paint cans.
Does anyone have an opinion on this? Am I a dangerous rebel who is perverting the EJB spec for my evil purposes? Am I completely off base and missing the whole point here? Does anyone like my idea? (it works very well by the way)

> I have implemented a working prototype of what I call a "Last one out turn off the light" design.
> Here is the design in a nutshell:
> Client creates a stateful session bean.
> Stateful session bean looks for an entity bean with the primary key of the client's IP address. If such an entity bean doesn't exist, the session bean creates it. The session bean increments a counter on the entity bean.
> When the client closes down it removes the session bean, which decrements the entity bean's counter and then tries to remove the entity bean.

Yes, like an object reference counter.

> The entity bean checks its counter. If the counter is greater than 0 then the entity does not allow itself to be destroyed.

The entity EJB cannot prevent itself from being destroyed. This is decided by the container, so unless you've rewritten that, the entity EJB only gets a chance to perform an action first, in ejbRemove().

> If the client crashes the session bean is still able to decrement the entity bean's counter and then try to remove it.

You should be able to call your reference counter from your session bean's ejbRemove() method.

> This was no small feat considering that in EJB 1.1 the container neglects to notify the session bean when the container decides to remove it.

Unless this is an implementation fault in your container, this too is achieved by implementing ejbRemove() in your session bean.

> Here's the question. Some people on my team argue that we shouldn't do this because there is no conversation going on between the client and the session bean and therefore it does not fit the purpose of a stateful session bean.

I wouldn't see this as a problem: there is an implicit state [conversation] of being connected. This is different from an event/message-based model, where that wouldn't hold true and a stateless session bean would be used.

> To me this argument is like saying I shouldn't use a screwdriver to pry the lid off of a paint can because screwdrivers are made for turning screws, not opening paint cans.

Tell them it's closer to using a screwdriver to remove a screw-cap lid :) and not to take things too literally; remember, "conversation" is a metaphor.

> Does anyone have an opinion on this? Am I a dangerous rebel who is perverting the EJB spec for my evil purposes?

"A rebel, perverting the spec"; probably not! And personally I like to be a little dangerous myself :)

> Am I completely off base and missing the whole point here? Does anyone like my idea? (it works very well by the way)

Actually it should do: an object reference counter is an old solution which predates even CORBA.
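
For concreteness, here is a minimal sketch of the session-bean side of such a design in EJB 1.1 terms, with the remove decision made by the session bean rather than by the entity (as clarified above). The UsageCounter/UsageCounterHome interfaces and the JNDI name are invented for illustration and are not from the original poster's code:

import java.rmi.RemoteException;
import javax.ejb.*;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote and home interfaces for the counter entity bean.
interface UsageCounter extends EJBObject {
    void increment() throws RemoteException;
    void decrement() throws RemoteException;
    int getCount() throws RemoteException;
}

interface UsageCounterHome extends EJBHome {
    UsageCounter create(String ip) throws CreateException, RemoteException;
    UsageCounter findByPrimaryKey(String ip) throws FinderException, RemoteException;
}

// Sketch only: a stateful session bean that keeps a per-client-IP
// reference count on an entity bean.
public class ClientSessionBean implements SessionBean {

    private String clientIp;  // conversational state: this client's IP address

    // Called via create(ip): find or create the counter entity for this
    // IP and increment it.
    public void ejbCreate(String ip) throws CreateException {
        this.clientIp = ip;
        try {
            lookupCounter(ip).increment();
        } catch (Exception e) {
            throw new EJBException(e);
        }
    }

    // Called when the client removes the bean (or the container times it
    // out): decrement, and remove the entity only if we were the last one.
    public void ejbRemove() {
        try {
            UsageCounter counter = lookupCounter(clientIp);
            counter.decrement();
            if (counter.getCount() == 0) {
                counter.remove();  // last one out turns off the light
            }
        } catch (Exception e) {
            // best effort: the counter may already be gone
        }
    }

    // Find the counter entity keyed by client IP, creating it on first use.
    private UsageCounter lookupCounter(String ip)
            throws NamingException, RemoteException, CreateException {
        Object ref = new InitialContext().lookup("java:comp/env/ejb/UsageCounter");
        UsageCounterHome home =
                (UsageCounterHome) PortableRemoteObject.narrow(ref, UsageCounterHome.class);
        try {
            return home.findByPrimaryKey(ip);
        } catch (FinderException notFound) {
            return home.create(ip);
        }
    }

    // Required SessionBean plumbing.
    public void setSessionContext(SessionContext ctx) {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}

A real implementation would also want the decrement, check, and remove to run in a single transaction so that two clients leaving at the same moment cannot race each other.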

Similar Messages

  • Active and Inactive partitions - need expert knowledge

    We are out of space on CCM ver 8.5.1 and we are about to do what Cisco says is needed. We are going to run an ISO and re-install 8.5.1, which will format the drive, and all existing data gets overwritten. Then we will restore the current backup.
    Our inactive partition has ver 6.something. Can we make that partition the active one, install 8.5.1 over it, then restore the backup there? My goal is to preserve our current active 8.5.1 in case anything goes wrong, so we will have a place to go back to. We've asked the Cisco tech every which way and don't think he is understanding what we are asking.

    That would just be futile: if the common partition is full, you'll probably face the same issue when you try to upgrade to 8.5. The common partition won't clean up and free space just because you switch to the previous version.
    You have a few options here:
    A) Use RTMT to clean as much log files and any other unnecessary files as possible
    B) Use the ciscocm.free_common_space_v1.1.cop.sgn and lose the ability to switch back at that point
    From the README:
    Important Note: This COP file does not install anything on the node. It just runs a script that
    removes the inactive side in the common partition to free up the disk space so that upgrade is
    successfully completed. You will not be able to switch back to the inactive version after installing this patch.
    C) If this is on ESXi, use the ciscocm.vmware-disk-size-reallocation COP file to increase the disk size and then try the upgrade.
    D) DRS restore on a fresh install as they instructed.

  • Re: What is a Business Process Expert?

    Hello all,
    Maybe we should open the discussion a bit beyond a "Human Centric" BPX. The main theme after all IS "The Business Process". There are hundreds, possibly thousands of Business Processes depending upon whether you categorize them as micro or macro processes. Any one of these, on either a micro or macro level, functions with or without human beings and may utilize machines that assist or automate value-added activities/processes which produce higher-value services or products in the age-old scenario [input > process > useful output & waste output].
    When we talk about Expert or Expertise, we are talking about precise know-how and narrow domain knowledge, whether depth or breadth.
    Referring back to a post by Luis Rincones, regarding ideas and concepts leading to BPX, automation of the business process should be the KEY focus of a BP engineer. Back in the late 80's early 90's, we referred to these BP engrs. as "Knowledge Engineers" and their principal goals were to interview "Experts" in order to extract expert knowledge (sometimes 20+ years of experience) and understand not only the flow of the process, but more importantly the RULES which govern all exceptions and variations encountered within the process, as well as "rules of thumb" in case the RULES don't apply.
    This is something which if we are truly to orient our Enterprise SOA architecture towards the goals of BPX mentioned above, will require a common "Rule Based Engine", which lives across all domains of knowledge, Objects, & environments, and works together with the design & development environment for the xAPP (I like to refer to them as "Expert Applications") and thereby enable not only the integration and homogenization of input data (via the technology stack), but also integrate the application of logic via a common rule based engine. In this way, E-SOA can also be enabled on the "useful" outbound process enabling services (actions) which mediate/manage/automate the process back to prescribed operating parameters.
    Of course, people can not be eliminated from the process. Lexus factory in Korea back in 2002 operated with approx. 60 or so people, 150 robots, and produced >300 units per day. People were mostly involved in QA and deviation functions.
    Automating the Business Process optimizes the output:input ratio and should minimize the waste:useful-output ratio. By doing so, the ROI, ROA, and operating margins are maximized and competitive advantage is gained over less able competitors.
    Anyone else have a few words on this?

    You raise a very important and interesting question regarding whether there is any framework for narrowing down the domain of a Business Process Expert.
    There are many frameworks out there:
    http://www.bpmg.org/8omega.php
    http://www.prosci.com/tutorial-design-mod1.htm
    http://www.intel.com/technology/itj/2004/volume08issue04/art11_collaboration/p01_abstract.htm
    From my experience, it is beneficial to focus on a critical "life support" process for the corporation. Many times this can be started at the top level, asking the COO, for instance, what the #1 thing is that causes issues with customers, collections, services, etc. One such item is "contract to cash" and specific sub-components of this process such as "unbilled hours", which if corrected prior to billing issuance will result in more accurate invoices and collections. An improvement of this nature shortens the cycle time of billing to collection, or of hour entry to billing calculation, and with improved accuracy increases cash flow and lowers cost per transaction.
    What you are suggesting in terms of a Media expert, etc. is more associated with Roles, which are aligned with Solution Scenarios within SAP. Business Processes may span many roles, and are critical to the application being developed.
    I believe this is the purpose of BPM, but would welcome any further comments from colleagues regarding framework.
    Please take a look at the following BLOG as well, as Lakshmikanth does a very good job explaining the landscape:
    /people/lakshmikanth.adiraju/blog/2006/09/19/soa-bpm-business-applications-netweaver-happy-customer

  • Hobbyist i7 build, comments please

    This is my first post on this forum and I apologise for its length but felt I should explain my choices.
    I found this forum and dvinfo.net in my research to build a new PC. This forum in particular has been of immense help. The expert knowledge, experience and opinions so generously given will, I hope, have saved me from making expensive and frustrating mistakes.
    I have read and re-read a number of the threads here and finally (I think) have put together my shopping list. I would very much appreciate if interested members could take a look and point out/suggest any flaws/alternatives before I place my orders.
    I'm an enthusiastic hobbyist and will use my new PC for photo-, audio-, SD video editing, DVD creation (family movies), family web site build and general PC stuff - so will not be video editing/encoding all the time. It will replace my present PC (an AMD 3500+, 3GB RAM, XP Pro that I built 5 yrs. ago) which is really struggling now. Photoshop & Premiere Elements 8 and Vegas MediaStudio 9 have proved it's no longer up to the job - 5 yrs. ago it was brill. Despite all the time and money invested, I'm sure that  in 5 yrs. time my new build will be no different.
    So having set the scene this is what I've come up with:-
    I don't want or feel that I need a RAID setup as I image backup my desktop data every day to my NAS and image backup my C drive weekly. I'm fully aware of the advantages of the appropriate RAID config but I've always been concerned about what happens when one of your matched drives dies and you can no longer get that model. I'm not seeking to build a dedicated editing machine to use for commercial purposes.
    I've never overclocked but with the kit I plan to get I intend to have a careful go using posts on here, particularly Harm's as a starter.
    I have chosen three different makes of HDD to spread my risk and carry out my own small reliability evaluation.
    The OS of my new system will be Windows 7 Home Premium 64-bit.
    My shopping list is:
    CPU                    Intel i7-920 - OEM version because I won't be using the stock cooler.
    MOBO                Gigabyte GA-EX58-UD5 - because it has sufficient SATA, eSATA and Firewire ports.
    RAM                   6GB (3x2GB) Corsair XMS3 DDR3, PC3-12800 (1600), CAS 8 (8-8-8-24), XMP (TR3X6G1600C8) - I'd like 12GB but I'm already over my original budget. If I decide to stretch to 12 (well it is nearly Christmas) how much of a risk am I taking with them not actually being a guaranteed matched set (a 12GB matched set is considerably dearer - the price has gone up £40 since yesterday)? From experience I know it is best to get all the RAM you think you want in the first instance because trying to find matching RAM later is a pain and can be impossible. I can't see this RAM in the mobo's QVL so I'm hoping it'll be OK.
    RAM Cooler         I don't know if this is reqd. for 6/12 GB? For this amount aren't the heat spreaders on the RAM sufficient?
    PSU                     Corsair 750HX PSU - as advised by Harm, I've run my build through eXtreme Power Supply Calculator Pro 2.5 to find the power needs of this lot plus possible additions. The 750 still has amps for un-anticipated add ons.
    CPU Cooler          Noctua NH-U12P SE 1366 - seems it will do a nice job and the price is bearable - a bit concerned about case fit though.
    Thermal Paste     Arctic Silver 5 - seems to be a favoured compound.
    Northbridge Fan  AKASA AK-VCX-01 40x40x10mm - not sure how I'm going to attach this to my mobo yet.
    GPU                     ATI HD5770 - I was going to get a 4890 but this looks a more suitable card re future.
    OS & Apps HDD   500 GB Western Digital WD5001AALS Caviar Black - this should be fast enough and has lots of space
    PF/Cache HDD    500 GB Western Digital WD5001AALS Caviar Black - as above.
    Genl Data HDD    500GB Samsung HD502HJ Spinpoint F3 - for anything that isn't video, photo or audio.
    Proj Files HDD     1TB Seagate ST31000528AS Barracuda 7200.12 - should be big enough and fast enough. Archive on NAS.
    Media HDD          1TB Seagate ST31000528AS Barracuda 7200.12 - as above.
    DVD-RW              Samsung SH-S223B/RSMN (Retail) - I chose this over a Sony Optiarc (OEM).
    2 SATA Caddies   Antec Easy SATA - to house my Project and Media HDDs so that when I'm not media editing I can unplug them to stop them wastefully spinning away creating heat, wearing out and using power.
    Case                    Coolermaster Storm Sniper All Black Mesh Edition (SGC-6000-KXN1-GP) - it was a hell of a job finding the right case. This isn't perfect but ticks most of my boxes. It's an improvement on the original Sniper, is spacious, should have good airflow and has the external front ports I want (USB, eSATA & Firewire) (and a handy button to switch off the bling LEDs). The new Zalman MS1000-HS1 was a contender with its front hot-swap bays but it's so new there are no useful reviews on it and its predecessor has poor airflow reviews. The Antec P183 is very nice but with my caddies the door would be a nuisance and it doesn't have the Firewire port. Ditto the Firewire port missing from the Antec 900-2. Had it been the eSATA missing from either I could have lived with it because the caddies have eSATA ports.
    Just to complete the picture, because my desktop capacity is to be increased I will add another drive as follows to one of my NAS devices to increase the archive/backup space.
    NAS HDD             2TB Seagate ST32000542AS Barracuda LP
    I'll be using my existing mouse, keyboard and recently purchased monitor (Dell 2209WA 22").
    Sorry this has been so long but I thought it would answer some questions that might have been asked - on the other hand it could be so long people forget what the question was.
    Thanks (if you stayed with me)
    Tony

    Thanks to you both for having the tenacity and time to give me feedback. Your suggestions were very useful as they caused me to revisit my planned workflow. My response is, I'm sorry, a little long winded but I want to show I've given your suggestions full consideration.
    Harm,
    Thanks for the praise but I only followed the advice you and a few other stalwarts of this forum have already given to many others.
    Thanks also for the comments re the compound and Northbridge cooler - that released a tenner to my 12GB RAM fund.
    When I looked back at the many notes I'd taken from the many posts I've read, I found the Northbridge cooler had come from the first post in reply to a question on Tom's Hardware Forum (www.tomshardware.co.uk/forum/page-260467_12_0.html) and was about another mobo. I've read so much about overheating that I was quite concerned about cooling it, so I included it. If you've never had a problem, that's good enough for me, so I've dropped it.
    I noticed your suggestion elsewhere about the Supermicro cage but didn't follow up then. I've had a look now and that bit of kit looks very good and I almost opted for it instead of the caddies. However, I've thought about the practicalities and have decided to stick with the caddies because:-
    (a) if I could foresee at some point my needing 4 or 5 hot-swap drives for concurrent projects then I would definitely have gone for the Supermicro cage - but I can't. I'm only going to be concentrating on one project at a time so I can archive the previous one, clear the drives and when the next one comes along start afresh. My projects run serially, not in parallel.
    (b) the case has a front eSATA port that I'll connect to one of the two eSATA connectors on the mobo - thus one of the eSATA ports on the mobo bracket will not have a supply (not a problem because I can't see a need for me to have two eSATA devices permanently plugged in to the back of the machine). This config will give me an eSATA port back and front, providing the flexibility of a permanent rear connection and a temporary front connection should I need to 'do a quickie' in parallel. The caddies also provide eSATA ports, giving me two more hot-swaps should the need arise.
    Jim,
    Re general PC stuff: my plan was to replace my old PC with a new one and apply an updated workflow config. I'd already considered keeping my current PC but I don't have the room space. However, your central point (which I know from posts is shared by Harm and others) of keeping the AV editing PC lean and clean is well made and on reflection is something I am able to do.
    In my 'command module' where I have my boys' toys I spin my chair to the right for my desktop and to the left for my iMac. I currently have WinXP Pro running in Boot Camp on my iMac, accessed through Parallels Desktop. In due course I'll be replacing XP with Win7. That is where I will concentrate all my non-AV activities, leaving my new build exclusively committed to AV.
    No longer needing an HDD for general data is going to release another £38 for the RAM fund. More funds (£32) can also be released for the fund because there is no need for the caddies - the machine will only be on when AV work is being done. Nice one! I owe you both a drink.
    Re northbridge: it's a 920 on an X58 (Bloomfield) mobo I'm getting, Jim, not an 860/870 on a P55 (Lynnfield).
    Re DVD-RW: My iMac has an Optiarc AD-5630A, which is why I thought I'd have a different make in my new build, thus giving me flexibility/options.
    Re KVM switch: Yes, I'm familiar with these but your earlier suggestion led to this being unnecessary.
    I'd appreciate either of you, or anyone else, commenting on the risk/merits of buying:-
    2 x 6GB Corsair XMS3 DDR3 PC3-12800 (1600), CAS 8 kits, or
    1 x 12GB Corsair Dominator DDR3 12800 (1600) CAS kit.
    At the moment the 2 x 6GB is £254 and 1 x 12GB is £304 from the same supplier. So it's a £50 premium for the matched set. It must be the safest buy but is removing the risk worth the extra £50?
    Many thanks again,
    Tony

  • CREATION OF SET FOR PURCHASE ORDER NUMBER

    Dear Sirs,
    I want to create a basic set containing all purchase order numbers. Even though I am using table MSEG and field EBELN, I am not getting any values in it. How do I create a basic set containing all the available PO numbers, which can be used in an FI validation?
    Please give me the full settings to create the above-mentioned set.
    Thanks and Regards

    I didn't get what you are saying; please elaborate. Let me put forth my issue clearly:
    my client wants to check whether a PO is available or not for payments exceeding 1,00,000/-.
    So I am configuring a validation, for which I want to keep the "set of PO numbers" in the CHECK part of the validation, which I am unable to do. Whenever I try to select the set it says no valid object found.
    So please help me with your expert knowledge in this regard.
    Thanks & Regards

  • Control Systems Engineer

    Control Systems Engineer
    Seattle Safety – Kent, WA 98032
    Seattle Safety is looking for a qualified individual to fill an opening for Control Systems Engineer. Seattle Safety designs, manufactures, and installs advanced crash test sled systems that are used in automotive and aeronautical industries. The duties of the Control Systems Engineer include:
    • Experience designing, testing, and optimizing industrial automation and control systems utilizing modern and classic control methodologies such as PID. Experience with neural networks.
    • Strong background in mathematics, physics, and signal analysis.
    • Electrical / Electronic design experience including automation.
    • Design, troubleshoot, and test software written in LabVIEW 8.6 & 2012 and C.
    • Help customers diagnose electrical, software, and mechanical problems with their systems. This will sometimes involve working odd hours for customers on the other side of the planet.
    • Support installations of crash test equipment at on-site locations worldwide.
    • Provide round the clock technical support for team members locally and abroad in subject matters concerning performance, installation, and maintenance of software and data acquisition hardware.
    • Prepare and maintain software flowcharts, layouts, and diagrams as needed to demonstrate solutions to outside staff.
    • Work with current team members to restructure existing crash test software.
    • Supports all aspects of software application design, development, testing, deployment, and support.
    • Perform software testing at a unit and integration level to ensure expected behavior.
    • Comment code clearly and consistently throughout the development process.
    • Maintain professional relationships with suppliers and vendors in order to keep up with industry developments.
    • Candidates must be located in the US; the Northwest would be ideal.
    Furthermore, the ideal candidate would possess the following skills:
    • Advanced or expert knowledge of LabVIEW.
    • Advanced or expert knowledge of C.
    • Expert knowledge of industrial controls.
    • Familiarity with a minimum one low-level programming language (C#, VB, Ladder, etc)
    • Familiarity with data acquisition concepts and hardware.
    • Discipline and organization with respect to software maintenance and version management. Experience with source configuration management tools a plus (CVS, ClearCase, Perforce, etc.)
    • Experience with sophisticated feedback control systems.
    • BSEE, BS Physics, and BSME are preferred but not required depending on experience.
    • Ability to work both alone and with colleagues to solve problems and to weigh the merits of differing approaches.
    Pay is commensurate with skills and qualifications of the applicant.

    Dear Sir/Mam,
    I’m particularly interested in the position of Control Systems Engineer, which relates strongly to my more than six years of experience in designing various applications for testing and automation. I am a Certified LabVIEW Associate Developer and am also preparing for the CLD. Currently I am leading a team on a project for machine automation and remote diagnostics, and I believe I meet all the essential criteria of the position. My work at my current organization has been rewarding and productive. However, I wish to expand my career further into an application design and development role. The position also has a definite correlation with my practical knowledge and experience. You’ll see from my CV that I have been deeply involved in the design and development of various applications based on programming techniques, hardware, and network design. I feel that I am well qualified to make an effective and useful contribution by designing good applications that the market will accept.
    I’m enthusiastic about the chance to participate in a meaningful role with an industry leader in the field.
    Thank you for your consideration of my application. Please contact me should you require any further information,
    Yours sincerely,
    Mohit Monga
    Attachments:
    Mohit Monga_CV.doc 159 KB

  • How do I fix a FINDER crash problem in 10.6.8

    I recently, in the last 10 days, started having a FINDER crash problem with my iMac 24" (iMac9,1) Intel Core 2 Duo 3.06 GHz computer.
    I have 4 GB of ram installed, no external hard disks installed.
    It seems to crash when I have five or more applications open, but it is not consistent.
    When it freezes or crashes, I get the spinning ball. I open the Force Quit window with command, option, escape and it shows Finder not responding.
    I click on Finder and tell it to relaunch. It does nothing. At that point I cannot even go to the Apple icon on the dock and do a shutdown or restart.
    From that point on, I have to press and hold the START button to shut the computer down.
    I have checked the hard disk integrity with Disk Utility after starting from the Snow Leopard 10.6 install disk. Everything shows good.
    I have repaired permissions using the same application. I then restart the computer and log back in.
    I have used Onyx for Snow Leopard version 2.3.1 to do some maintenance routines.
    I have checked the S.M.A.R.T. status of the hard drive and it passes.
    I have verified that the Daily, Weekly, Monthly routines have been run. They have.
    I have executed the Cleaning tab to include the following cache files:
    System: Boot
                   International Preferences
                   QuickTime Components
                   Audio Components
                   Other Components
    User Cache: Applications
                              Preferences of System Panels
                              Java & Applets
                             Desktop Background
                             International Preferences
                             Dock Icons
                             ColorSync
                             QuickLook
                             Temporary Items
    Internet Cache: Browser Cache
                                Download Cache
                                Browser History
                                Recent Searches
                                Web Page Previews
    Fonts Cache:  System and User
    Logs Cache: Log Files
                            System Archive Logs
                            User Diagnostic Reports
                            System Diagnostic Reports
                            Mobile Devices CrashReporter
    Misc. Cache: Recent Items
                            Recent conversion of Calculator
                            Obsolete Items
                            QuickTime Content Guide
    Under Maintenance Items:
    Rebuild:
                   LaunchServices
                   dyld's shared cache
    Short of taking this computer, packing it up, and hauling it down to the Apple Store for the 'Genius' to poke around in it, I am at a brick wall as to what to do next. (I am using this same computer to enter this message.)
    Here's hoping someone on the forum will have an answer.
    Douglas J. Parker
    P.S. Do Apple technicians ever read these discussion questions and interject their 'expert' knowledge?

    /Library/Preferences is where you will find that file.
    #2 - yes

  • Oracle 9.2.0.7 vs Oracle 10.2.3 with COMPATIBLE parameter set to 9.2.0.7

    Hi,
    First off, I am NOT an Oracle DBA (or even a DBA!), so please excuse my ignorance or any errors contained within this post.
    My client is currently planning to upgrade their 9.2.0.7 database server to 10.2.3. Reading through the Oracle upgrade documentation, I came across the COMPATIBLE parameter, and furthermore that you can set it to 9.2.0.7. I realise that by setting this, many new features introduced in Oracle 10g will not be available - however, getting off the unsupported 9.2.0.7 platform is the priority.
    Can someone please advise what the differences are between a real 9.2.0.7 database server and a 10.2.3 (with 9.2.0.7 compatible) server? By setting this, will it guarantee that SQL scripts and external applications continue to work after the upgrade? I.e. the list of compatibility issues detailed in the Oracle database upgrade guide - will these issues be made irrelevant by this setting?
    Your expert knowledge is appreciated.
    Regards,
    Shan

    When upgrading from 9i to 10g the COMPATIBLE parameter should be set to 9.2.0. All of the new features will only be available when the parameter is set to 10.0.0 or higher and the instance restarted.
    Once an instance has been restarted with a COMPATIBLE value of 10.0.0 it can never be set back to 9.x.
    Why do you think the upgrade is required if you still want to use the features of 9.2.0.7?

  • Subreport displays "..Missing parameter values" when published

    I would like some expert knowledge on the issue I'm currently having with a transaction report that I've developed. I have one main report with all the db tables linked.
    I have two parameter fields defined: one is part id, the other is date issue.
    I have 3 report footers defined: 1 = issues subreport, 2 = adjustments subreport, 3 = cycle counts.
    The subreports have a subreport link for part id and one for the date issue, date adjustment, and date cycle counted. I've also created a date parameter field in each of the subreports, or else this data will not display on the primary report correctly.
    When I run this report in my Crystal Reports XI Pro software, it works fine; all transactions are captured and everything is great.
    When I publish this report onto the Crystal Reports Enterprise server, after entering the two values on the main report, the report errors with [COMException: Missing parameter values].
    I know that there is a subreport linkage problem that I have, or I'm doing something backwards, but any suggestions would be greatly appreciated.
    Regards,
    Linda

    Thank you for the response Sharon, but it is still not working for me. Here are more details if anyone has any feedback to give.
    Main report:
    Has 5 db tables linked: a counts table, a parts table, a detail table, a receipts table, and an adjustment table.
    Parameter fields on the Main report:
    1 for part id
    1 for date issue range <this range should match to the other 3 sub-reports>
    Sub-Reports: <example>
    Receipts sub
    Adjustment sub
    Count sub
    Example: The report footers where these subreports reside, all their subreport links are:
    partid --> ?Pm-<tablename>partid with check in the select data in subreport based on field <tablename>partid
    receivedate --> ?Pm<tablename>receive date with no checkbox
    same for other 3 = this is the only way the data displays correctly.
    All sub-reports have the following parameter fields:
    Pm-<tablename.Part>
    Receiptdate
    Pm-<tablename.receipt date>
    Pm<tablename.part>
    adjustmentdate
    Pm<tablename.adjustment date>
    Pm<tablemname.part>
    countdate
    Pm<tablename.count date>
    When I press refresh on the main report, all of the date ranges come up, I enter the values in all of the ranges, and the data displays accurately and correctly.
    Because the 3 prompts in the sub-reports are not on the main report, I think this is where the problem lies: when I publish the report on the server it errors with missing parameters.
    Thank you

  • Printer Compatibility Issue with PDF?

    I have a PDF document someone else created with InDesign CS3 5.0.4. It opens in Reader 9.1.3 and prints perfectly on some newer printers. However, on older printers the very bottom of the document is chopped off, both in print preview and in the actual print on paper. Both printers are HP and I spent a couple of hours chatting with HP Tech Support, trying new drivers, etc. Their conclusion was that it is caused by an unresolvable issue between the PDF and the older printer. They, of course, said I would need to buy new printers even though they are not that old and work just fine for everything else.
    So far the same issue has come up in three other offices. There must be a better solution than throwing money at it.
    I have never used InDesign and I am wondering if there could be some settings used at the PDF creation in either InDesign or Acrobat that may solve the problem.  The form must be printed exactly as designed so Shrink to Fit is not an option.
    Thanks.

    Thank you for answering so quickly!
    I agree, and I am afraid the answer to this issue is to either buy new printers or redesign the form.
    The PDF is a new state government form and I want to exhaust all other options before I get involved with the bureaucracy of inquiring about a change to the design of the form.  (A stick in the eye sounds more fun.)
    So from your expert knowledge of InDesign and PDFs there is no way, short of redesigning the form, to have it print correctly on "older" printers?
    Thanks again.

  • Time-Out for Process with Correlation - Avoiding blocked Queues

    Hello,
    I have the following requirement, it would be great if you could share your opinions and knowledge on it:
    I have an integration process which opens a correlation and sends out an asynchronous request message. The response normally arrives after 10 minutes and is correlated to the process using a correlation ID that has been activated by the request message. Additionally, there is a deadline branch which terminates the process after 1 hour.
    So far this works fine.
    However, there might be some very rare cases where a response is delayed (e.g. if there is maintenance work on the partner side). So it could be the case that a response arrives a day later, when the process instance is already terminated.
    My questions are now:
    1. How can I avoid the inbound queue being blocked by a response message that cannot find an active correlation?
    2. The best thing would be to store the response message on the file system in case no active correlations are open any more.
    So basically the logic could be:
    If a message arrives, check whether a process instance with a corresponding correlation is active; if yes, send it to the process, if not, send it to the file receiver.
    Is this possible?
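    In rough Java terms, the dispatch rule being asked about could be sketched as below. All names here are invented for illustration (this is not an SAP XI/PI API, just the decision logic spelled out):

    // Illustrative sketch only: route a response either to the waiting
    // process instance or to a file receiver, so the inbound queue is never
    // blocked by a message that cannot find an active correlation.
    interface CorrelationRegistry {
        boolean hasActiveCorrelation(String correlationId);
    }

    interface MessageTarget {
        void accept(String correlationId, byte[] payload);
    }

    class ResponseDispatcher {
        private final CorrelationRegistry registry;
        private final MessageTarget process;    // hands the message to the waiting process
        private final MessageTarget fileStore;  // writes the late response to the file system

        ResponseDispatcher(CorrelationRegistry registry,
                           MessageTarget process, MessageTarget fileStore) {
            this.registry = registry;
            this.process = process;
            this.fileStore = fileStore;
        }

        void dispatch(String correlationId, byte[] payload) {
            if (registry.hasActiveCorrelation(correlationId)) {
                process.accept(correlationId, payload);   // normal case: correlate
            } else {
                fileStore.accept(correlationId, payload); // late case: park on disk
            }
        }
    }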

    Thread closed - no solution so far.

  • Need Help: Workflow Decision Task not visible in NWBC for customer Task

    Hello All,
    I am new to NWBC configuration and need your expert knowledge for the below issue. I have checked the thread on the Approve/Reject buttons for SRM approval of an RFx response, but was not able to get the exact steps for the configuration in NWBC.
    Requirement.
    We are working on the SLM/SRM module, where we have a requirement to create a custom workflow with a custom task, for which we have executed the below actions:
    Created the custom workflow.
    Created the custom task with a custom ZCLASS and ZEVET.
    Created event linkages in SWE2.
    After doing this we are getting the work item in NWBC. Currently we don’t have the Portal system in place, so we are using NWBC for all our testing.
    What is the issue?
    When we select the work item in the NWBC (UML) inbox, the decision task buttons, i.e. “Approve” or “Reject”, are not visible, whereas for SAP standard tasks the buttons are visible.
    What help we need?
    When we cross-checked the SCN/IBM portal, we found that we need to do the XML file configuration. Can anyone kindly let us know the steps we need to follow to achieve our functionality?
    Thanks a lot in advance.
    Thanks and Regards
    Channa

    You need to share the details of what Inbox you are using, configuration steps depend on it. If you aren't using portal, you aren't using UWL and there is no UML inbox. You need to get your facts straight. Most likely you are using the Business Workflow Inbox (SWF_WORKPLACE) or the Lean Inbox (IBO_WDA_INBOX). Both are based on Web Dynpro ABAP and POWL but the configuration is different. Regardless, you posted this in the wrong space. Correct space is either SAP NetWeaver Business Client or Web Dynpro ABAP.

  • Lost disk space after each reinstall.

    I've had to reinstall Leopard on my MacBook three times in the last week for different reasons. After each reinstall (erase and install) it appeared that I lost a little more GB on the HDD. I have a 250GB HDD, and I know it actually reads as 232.87, but without uploading any media and only using the reinstall disks that came with the box, I have only 218 or so GB. Is this normal? I can't find the numbers anywhere; they just don't add up.

    Anthony Turtzo wrote:
    I can't find the numbers anywhere; they just don't add up.
    My guess is that you have added up the sizes of the folders visible in Finder at the root level of the drive (Applications, Library, System, & Users) & compared that total to the total space used on the drive. The two numbers will not match because there are also hidden files & folders at the root level of the drive that Finder does not normally show. The hidden items are necessary for the proper operation of the Mac & in normal use are managed by the system. Since users should not tamper with them directly without good reason & it requires expert knowledge to do so without breaking anything, they have been made invisible in the Finder view.
    It is normal for the drive space used by these items to fluctuate, & for the total to grow somewhat after first installing the OS. Some of the items contain temporary files that are created & destroyed as needed. Some contain usage, crash, & other informational logs that are managed by routines that the system automatically runs periodically to keep their total from becoming too large. Others contain initialization data gathered over time that the system uses to start up or to run processes efficiently.
    Thus, there is no fixed correct number for their total size since that depends on the history & current state of the machine. On my iMac G5, one large hidden folder ("var") currently uses about 1.05 GB of drive space, but consider that only a ballpark number since it may be considerably different on different Macs at different times.
    If you are still concerned about the health of the drive, especially that something is eating up disk space unnecessarily, then run Disk Utility to check it. In the "First Aid" tab select your drive & click the "Verify Disk" button. If this reports no problems then you can be pretty sure there is nothing to worry about.

  • Optimizing an SQL Query using Oracle SQL Developer

    Hi,
    Currently I am using Oracle SQL Developer as my database IDE.
    Is it possible to use Oracle's SQL Developer for the purpose of optimizing an SQL query?
    For example, assume I have a query such as:
    Select * from Tranac_Master where CUST_STATAUS='Y' and JCC_REPORT='N'
    Could anybody please tell me how I can use Oracle SQL Developer to optimize this query, or any other SQL queries?
    Please share your ideas; thanks in advance.

    1. Your query looks very simplistic as it is, so I fail to see how you can better optimise it (unless 'Tranac_Master' is a view, in which case I'd need to see the view details).
    2. No tool can automagically optimise your SQL to any degree of practical use. Very minor adjustments may be possible automatically, but really it is a question of you knowing your data & database design accurately, and then applying your expert knowledge to tune it.
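    What SQL Developer can do is show you the optimizer's execution plan for the query (its Explain Plan function), which is the raw material for this kind of tuning. As a sketch, the same plan can also be pulled over plain JDBC; the connection string and credentials below are placeholders:

    import java.sql.*;

    // Sketch: record and print the execution plan for the query above.
    // EXPLAIN PLAN FOR and DBMS_XPLAN.DISPLAY are standard Oracle features;
    // host, service name, user, and password are made-up placeholders.
    public class ExplainPlanDemo {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {

                // Ask the optimizer to record its plan in PLAN_TABLE.
                stmt.execute("EXPLAIN PLAN FOR "
                        + "Select * from Tranac_Master "
                        + "where CUST_STATAUS = 'Y' and JCC_REPORT = 'N'");

                // Read the plan back. A full table scan here would suggest
                // trying an index on (CUST_STATAUS, JCC_REPORT); whether it
                // helps depends on the data's selectivity.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }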

  • Same standard SAP prog is taking longer time in new version than older ver?

    Hello
    The below scenario runs as a batch job.
    A standard SAP IDoc-processing FM (which is designed and working on the parallel processing technique) takes 45 min in a release 3.1 SAP system, but if we run the same variant (same amount of input data, and the rest of the stuff also the same) on an ECC 6.0 system, it takes 5 hours!
    Actually, I do not have a logon ID for the 3.1 release, hence I cannot debug it. Please let me know: what is the best approach to analyse this bad performance? Is it ST05? Or should I debug the program, watching EACH STEP using F5/single-step debugging?
    Thank you

    Hello,
    please refer to Hermann Gahm's blog about the ST12 transaction and use it to trace and analyze the results.
    Here is the article how to trace a running batch job: /people/hermann.gahm/blog/2009/09/22/st12--the-workprocess-trace
    But in order to really optimize something you'll need expert knowledge.
    Kind regards,
      Yuri
