Any ideas on best approach?

I need ideas on how to best compare a user's input against my randomly generated characters, i.e. does the user's input of, say, "abc" match what the computer has generated? How can I return the information to the user?

if (computerGenerated.equals(userEntered)) {
    System.out.println("They match!");
} else {
    System.out.println("Computer had " + computerGenerated);
}

Mark
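For reference, here is a minimal, self-contained sketch of the whole round trip (generate, read input, compare). The string length, the alphabet, and the class name are illustrative assumptions, not from the original post:

import java.util.Random;
import java.util.Scanner;

public class GuessCharacters {
    // Illustrative choices: a 3-character string drawn from lower-case letters.
    private static final int LENGTH = 3;
    private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz";

    public static void main(String[] args) {
        // Generate LENGTH random characters.
        Random random = new Random();
        StringBuilder sb = new StringBuilder(LENGTH);
        for (int i = 0; i < LENGTH; i++) {
            sb.append(ALPHABET.charAt(random.nextInt(ALPHABET.length())));
        }
        String computerGenerated = sb.toString();

        // Read the user's guess and report the result back to them.
        Scanner in = new Scanner(System.in);
        System.out.print("Enter " + LENGTH + " characters: ");
        String userEntered = in.nextLine().trim();

        if (computerGenerated.equals(userEntered)) {
            System.out.println("They match!");
        } else {
            System.out.println("Computer had " + computerGenerated);
        }
    }
}

Note that String.equals is case-sensitive; equalsIgnoreCase is the drop-in alternative if case should not matter.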

Similar Messages

  • Ideas on best approach - Product Catalogue, Orders to appear in a UWL

    I am looking for some guidance/ideas on how best to approach the following:
    I have a request to create a product catalogue (only to be accessed internally), that will allow staff to order products.
Once they have selected the products and quantities that they want, they will submit the order, and the order should appear as an item in the UWL for a user. This user will then fill the order and ship the goods to the user who ordered them.
    I'm not sure which technology I would be best to use here, so any suggestions on this would be gratefully received.
    Thanks in advance
    Regards
    Richard

    Has no-one done anything like this?
    Has anyone done a product catalog?

  • My boss wants to be able to use his phone as an employee directory, any ideas on best route

Something he could also access on his iPad and Mac would be best. He's looking for a way to look up and recognize his staff of over 300.

I would use Address Book and create a new group labeled either "Work" or "Employee List".
I'd like to think he can import contact data from a spreadsheet, but I am not sure about this part.

  • Best approach to "migrate" from BEX reports to Webi reports ?

    Hello,
I have read lots of documents regarding best practices on how to build WebI reports and universes on top of BW.
But I can't find any document about the best approach - not in terms of performance, but in terms of the best way of using reports.
I mean: when end users are coming from BEx reports (where they can drill down through hierarchies and use free filters) to WebI reports (where the layout is quite beautiful and the user can change it easily), this is not the same way of consuming reports.
I come from the BO world and am new to reporting on top of BW.
For me, WebI is good for fairly static layout reporting where the data is clear and available. Of course you can have prompts for interactivity and more accurate reporting. Drill-down is just a feature, not the real purpose of the reporting tool.
So, in my view, there is a gap between the two tools (BEx and WebI), but the end users are the same.
So I'm wondering if you have any feedback on the best approach to building WebI reports when end users are coming from BEx reporting.
And how do you choose between prompts, drill-down (with available filters at the top of the window), fold/unfold, and input controls, or just having different levels of hierarchies in the table/section/breaks but without drill-down (because if you drill down, the report starts to look odd with different levels)?
So, if you have any feedback or advice....
Thanks in advance,
Rgds,

Hi,
WebI doesn't replace BEx reports; it is for a different audience. In fact, BEx is for OLAP reporting and analysis.
You can find some answers in the FAQ "The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI & Business Objects Roadmap" (original link is broken).
Specifically for "What is the future of the BEx Query Designer?", see sections 11 and 3 of that FAQ (original links are broken).
The idea is to use the right tool for the right job.
You can find more information here: [http://www.sdn.sap.com/irj/sdn/edw], [http://www.sap.com/solutions/sapbusinessobjects/index.epx], [http://www.sap.com/solutions/sapbusinessobjects/newsevents/index.epx], [http://www.sap.com/community/flash/BusinessIntelligenceAGuideforMidsizeCompanies.pdf]
I hope this helps you.
Best regards.

  • What are the best approaches for mapping re-start in OWB?

    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
I would like to know the best approaches to incorporate re-start options in our process, i.e. after a failure of a mapping in a process flow.
How do we recycle failed rows?
Are there any built-in features/best approaches in OWB to implement the above?
Do the runtime audit tables help us to build a re-start process?
If not, do we need to maintain our own (custom) tables to hold such data?
How did our forum members handle the above situations?
Any ideas?
    Thanks in advance.
    RI

    Hi RI,
How many mappings (range) do you have in a process flow?
Several hundreds (100-300 mappings).
If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the option Repeat.
On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2?
You can specify a restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of error all table updates are rolled back; but there are several exceptions - for example, multiple target tables in a mapping without correlated commit, or an error in post-mapping - you must carefully analyze the results of the error).
What will happen if m3 fails?
The process is suspended and you can restart execution from m3.
By running without failover and with max. number of errors = 0, you reduce recycled failed rows to zero (0). These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
What is the impact if we have a large volume of data?
In my opinion, for large volumes set-based mode is the preferred processing mode.
With this mode you get the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What's the best approach to resetting Calendar data on Server?

I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite. I'll paste a snippet from the Error Log at the bottom that shows the error - I've highlighted the description of the problem in red.
I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff). Any thoughts on the "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • HT1338 My MacBook Pro (running Leopard 10.5.8) won't allow keyboard to type an upper case 'C' using the shift key...works fine with caps lock, or, with my Typinator workaround using double-typed lower case c (not always best). Any ideas?

    My MacBook Pro keyboard won't type an upper case 'C' using the shift key... only with caps lock. Workaround has been to use Typinator by typing a double lower case c, not always the best solution. Mac is a refurbished model, which initially was fine. Apple tech helped me correct the quirk when it first appeared, but now it has returned and refuses fixes. Any ideas?

    I'm willing to bet that this has something to do with iCloud.  I've been facing a frozen computer nearly every time I go into it. 
In Activity Monitor I have seen two programs associated with iCloud that take up about 2.5 GB of memory (I only have 4 GB in the computer) -- causing everything to freeze.  One of these is called "iCloud Helper" and the other one is something like "Address Book Sync Helper" -- I see parts of these names in the stuff you have posted.
    I am at my wits' end with these and have written to Apple asking them to do away with these or fix them.  To get rid of the Address book sync thing I have gone into system preferences for iCloud and unchecked Contacts -- but then spontaneously it gets rechecked.  And oftentimes after I Force Quit the iCloud helper, it spontaneously turns on again.  My conclusion --- the iCloud is basically unusable.  It turns a Mac into the most useless, clogged, sluggish PC.  If this is happening to a lot of people's computers -- and I see no reason why yours or mine should be an exception -- these programs just might destroy Apple itself. 
    So -- I'm about to completely give up on iCloud, and I suspect that others will too unless this gets fixed.
    Cheers,
    Bob

HT4796 Any idea how long the migration process takes? Is it the best way to transfer old computer files to your Mac?

Any idea how long the migration process takes to complete? Is it the best way to transfer files from your PC to Mac?

    If you are using Migration Assistant with a WiFi connection expect 12+ hours, 24 hours is possible.
    If you are using hardwired network connections expect 4 to 8 hours.

Best way to show a "kiosk type" video done with Final Cut Pro. Any ideas?

    Hi,
Was hoping to get some pro opinions (no pun intended) on the best way to send my edited Final Cut video (PNG format for super clear text) to 32 inch monitors that will play it in the "store". We have 4 TVs, and they all need to show the video, and the sound (music) accompanying the video. The video is around 8 hours long (a bunch of edited video clips set to music) and will have to loop somehow. It's much too long for a DVD, and I want to keep the quality primo, so I was considering either a video iPod or an actual computer. It's just that the iPod is so small and frail (lol, not really, but around the people I work with it would be), and a computer might be overkill. Any ideas or opinions would be highly appreciated.
    Thanks.

    http://www.wdc.com/en/products/index.asp?cat=30
    http://store.apple.com/us/browse/home/shopipod/family/appletv

Best 3G camera app? Anyone got any ideas of what to get?

Well, I'm now running 3.0, but some camera apps are awaiting confirmation from Apple about their updates.
Anyway, does anyone know what the best camera app on the App Store is to get for the 3G?
As I know we would all like the functions of the 3GS's camera.
So if anyone's got any ideas, I'd very much appreciate it.

    Hi ...
    Try here >  How to Troubleshoot iSight

  • R/3 4.7 to ECC 6.0 Upgrade - Best Approach?

    Hi,
    We have to upgrade R/3 4.7 to ECC 6.0
We have to do the DB, Unicode, and R/3 upgrades. I want to know what the best approaches available are and what risks are associated with each approach.
We have been considering the following approaches (but need to understand the risk of each approach).
1) DB and Unicode first, and then the R/3 upgrade after 2-3 months
I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
2) DB in the first step, and then Unicode and R/3 together after 2-3 months
Does it make sense to combine Unicode and R/3, as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using 2 systems for import and export during the Unicode conversion.
3) DB and R/3 in the first step, and then Unicode much later
We had a discussion with SAP and they say there is a disclaimer on not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please let us know if this is correct, and also whether doing Unicode later will have any key challenges apart from certain language characters not being available?
We are on single code page 1100 and the database size is about 2 terabytes.
    Thanks in advance
    Regards
    Rahul

    Hi Rahul
regarding your 'Unicode doubt', some ideas:
1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among others, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
2) In Unicode@SAP you can find several (content-rich) FAQs.
Conclusion from the FAQ: first of all, your strategy needs to follow your business model (which we cannot see from here).
Example: the "Upgrade to mySAP ERP 2005" FAQ includes interesting remarks in the section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
    "...The Unicode conversion depends on the customer situation....
    ... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
    - If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
    In the Technical Unicode FAQ you read under "What are the advantages of Unicode ...", that "Proper usage of JAVA is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal). ....
=> Depending on whether your systems support global processes, and on your use of Java applications, your strategy might need to look different.
3) In particular, in view of your 3rd option, I recommend you take a look at these FAQs, if you have not already done so.
Remark: mySAP ERP 2005 is the former name of the application, which is now named SAP ERP 6.0.
    regards, and HTH, Andreas R

  • Any ideas on how to do a local mirror for this situation?

I'm starting a project to allow Arch Linux to be used in a cluster environment (autoinstallation of nodes and such). I'm going to implement this where I'm working right now (~25-node cluster). Currently they're using RocksClusters.
The problem is that the connection to the internet from work is generally really bad during the day. There's an HTTP proxy in the middle. The other day I tried installing Arch Linux using the FTP image, and it took more than 5 hours just to do an upgrade plus installing subversion and other packages, right after an FTP installation (which wasn't fast either).
The idea is that the frontend (the main node of the cluster) would hold a local mirror of packages so that the nodes use that mirror when they install (the frontend would use it too, because of the bad speed).
As I think it would be better to update the mirror and perform an upgrade only infrequently (if something breaks I would leave users stranded until I fix it), I thought I should download a snapshot of extra/ and current/ only once. But the best speed I get from rsync (even at night, when an HTTP transfer from kernel.org goes at 200KB/s) is ~13KB/s; this would take days (and when it's done I would have to resync because of any newer packages that could have been released in the meantime).
I could download extra/ and current/ at home (I have 250KB/s downstream but I get like ~100KB/s from rsync) and record several CDs (6!... ~(3GB + 700MB)/700MB), but that's not very nice. I think maybe that would be just for the first time; afterwards an rsync would take a lot less, but I don't know how much less.
Obviously I could speed things up a little if I downloaded the full ISO and rsynced current/ using that as a base. But for extra/ I don't have ISOs.
I think it's a little impractical to download everything, as I wouldn't need the whole of extra/ anyway. But it's hard to know all the packages needed and their dependencies so as to download only those.
So... I would like to know if anyone has any ideas on how to make this practical. I wouldn't want my whole project to crumble because of this detail.
It's annoying because using pacman at home always works at max speed.
BTW, I've read the HOWTO that explains how to mount pacman's cache on the nodes to have a shared cache. But I'm not very sure that's a good option. Anyway, it would imply downloading everything at work, which would take years.

V01D wrote: After installation, the packages that are in the cache are the ones from current/. All the stuff from extra/ won't be there until I install something from there.
Anyway, if I install from a full CD I get old packages, which I have to pacman -Syu after installation (and that takes a long time).
Oh, so that's how it is.
    V01D wrote:
I think I'm going to try this out:
* rsync at home (already got current/ last night)
* burn a DVD
* go to work and then update the packages on the DVD using rsync again (this should be fast if I don't wait too long after burning it)
And to optimize further rsyncs:
* Do a first install on all nodes and try it out for a few days (so I install all the packages needed)
* Construct a list of the packages used by all nodes and the frontend
* Remove the unused packages from my mirror
* Do further rsync updates only updating the files I already have
This would be the manual approach to the shared cache idea, I think.
Hmm... but why do you want to use rsync? You'd need to download the whole repo, which is quite large (current + extra + testing + community > 5.1GB; extra is the largest). I suggest you download only those packages, and their dependencies, that you actually use.
I have a similar situation. At work I have unlimited traffic (48kbps by day and 128kbps at night); at home, a fast connection (up to 256kbps) but I pay for every megabyte (a little, but after 100-500 megabytes it becomes very noticeable). So I do
    yes | pacman -Syuw
    or
    yes | pacman -Syw pkg1 pkg2 ... pkgN
at work (especially when packages are big), then put the newly downloaded files on my flash drive, then put them into /var/cache/pacman/pkg/ at home, and then I only need to do pacman -Sy before installing, which takes less than a minute.
I have a 1GB flash drive so I can always keep the whole cache on it. Synchronizing work cache <-> flash drive <-> home cache is very easy.
P.S.: Recently I decided to make a complete mirror of all i686 packages from archlinux.org with rsync - not for myself but for my friends who wanted to install Linux. (I don't pay for every megabyte at work.) Even so, it took almost a week to download 5.1 GB of packages.
IMHO, for most local mirror setups rsync is overkill. How many users are there that use more than 30% of the packages in the repos? So why make a full mirror with rsync when you can cache only the installed packages?

What is the best approach to handle multiple FKs to a single table?

If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
If the above two objects are imported with FKs into an EUL and Discoverer Plus is used to create the above report: on the first inclusion of a person name, Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
One solution is to create a custom folder with a query like:
select col1, col2, ...coln,
pc.name creator_name, pc.address creator_address, .... pc.phone creator_phone,
pm.name modifier_name, pm.address modifier_address, .... pm.phone modifier_phone
from main m,
person pc,
person pm
where m.person_id_creator = pc.person_id
and m.person_id_modifier = pm.person_id
The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
The question is: which approach is better, or is there a better way?
With solution 1 you will not be able to use functions on folder items.
With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted ones), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too. (For instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address... now you will have to create multiple LOCATION folders.)
A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters are needed on person names (the function will then be used in the WHERE clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes - it's not a good idea to register 50 functions).
Any comments/suggestions will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation, and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to say PERSON_MODIFIED
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work. In both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • Best approach to add Z custom field to IC Agent Inbox search and results view

    Hi Experts,
We have a requirement to add a Z custom field to the IC Agent Inbox search and results view. I have found multiple forum threads and ideas, but I am looking for the best approach for handling this. I am sure you experts have already done this.
    Thanks in advance.
    Regards
    Siva

    Hi Sivakumar,
AET is by far the best way to create a custom field in this area. It is easy and simple.
Also, once a field is added to one business object, it can be used in other objects as well.
There is also a demo available for AET on SDN.
    Please let me know if any more help is required.
    Thanks,
    Bhushan

  • Best approach for syndication in Central MDM

    MDM 7.1
    CE 7.2
    ERP 6 EHP4
    PI 7.1 EHP1
    We are currently developing a custom application using CE/BPM workflow for central maintenance of customer master data. One of the topics under discussion is the right approach for syndication once a record is complete.
This SAP document on collaborative material master data creation (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60a3118e-3c3e-2d10-d899-ddd0b963beba?quicklink=downloads&overridelayout=true) provides one way to achieve this syndication, by first calling a web service from BPM to create the record in ERP before checking it in to MDM. While I am personally fine with that approach, some of my colleagues aren't too keen on issuing synchronous calls from BPM. Rather, they would like to use the syndication engine of MDM to transmit data to downstream systems (currently only SAP ERP) using IDocs. But there is a caveat here: to use syndication, the record has to be checked in.
The problem is that once the record is checked in to MDM, it is open for modification. However, the asynchronous call to ERP using IDocs for creation of the customer master might fail for any number of reasons. In that case, the MDM record might need a modification before resubmitting to ERP. In the meantime, since the record was checked in before syndication, someone else might have checked it out, potentially resulting in data quality issues. So to avoid this situation, the developer has decided to check in -> syndicate -> check out -> wait for the confirmation IDoc -> check in on success (sketched below). This isn't a clean way to syndicate, but it might address the record locking issue.
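To make that sequencing concrete, here is a minimal Java sketch of the check-in/syndicate/check-out cycle. Every type and method name is a hypothetical illustration - none of this is a real SAP MDM or PI API - and the wait for the confirmation IDoc is abstracted into a boolean:

// Hypothetical illustration of the locking sequence discussed above.
public class SyndicationCycle {

    interface MdmRecord {
        void checkIn();    // releases the record (required before syndication)
        void checkOut();   // locks the record against other editors
        void syndicate();  // hands the record to the syndication engine (IDoc out)
    }

    // check in -> syndicate -> check out -> wait for confirmation IDoc
    // -> check in on success; stay checked out on failure for correction.
    static void syndicateWithLock(MdmRecord record, boolean confirmationIdocOk) {
        record.checkIn();    // syndication requires a checked-in record
        record.syndicate();  // asynchronous IDoc to ERP
        record.checkOut();   // immediately re-lock so nobody edits meanwhile

        if (confirmationIdocOk) {
            record.checkIn(); // ERP confirmed: release the record again
        }
        // On failure the record stays checked out, so it can be corrected
        // and resubmitted without risking concurrent edits.
    }
}

The remaining race is the window between checkIn() and checkOut(), which is exactly the locking concern raised above.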
Another consideration is to design the application with the view that sometime in the future this master data might have to be syndicated to other SAP and non-SAP systems as well. Ensuring syndication to all downstream systems is complete before checking in to MDM can be a tricky requirement and might need some complex ccBPM development, or evaluating something similar to two-phase commit (which might be overkill). In any case, a best-practice approach for keeping downstream systems in sync with MDM in a central-MDM scenario has to be shared by SAP. So it would be good to have comments from the people who developed the reference application for collaborative material master data creation.
    If there are any customers who have come up with a custom solution which works, please do share the experience.
    Thanks and regards,
    Shehryar

Thanks Ravi. While there is more than one possible solution to the immediate problem, I am actually looking for a design pattern that SAP recommends, or that a customer has developed, to address the issues related to synchronization of master data in a central MDM environment.
The idea behind a central master data management function, as you know, is that all participating business systems use the same basic master data being authored in MDM. This data has to be synchronized with all participating systems, rather than just one system. To me, a ccBPM workflow or a two-phase-commit design pattern seems to be the solution. But it would be good to know how other customers are addressing the issue of master data synchronization with multiple systems, or SAP's recommendations on this issue.
    Regards,
    Shehryar
