Best practice for retail store cycle count

Hi experts
Do you have any experience or best practices in designing a cycle count process for your clients?
By cycle count, I mean frequent (e.g. daily) counting of specific item categories, or exceptional count requests.
My client wants to implement a cycle count process to minimize the effort of the full stock takes, which happen around three times per year.
They are using SAP as the core system to keep track of inventory, together with an external store system, so some of the considerations would be, for example:
- whether the count variance or the counted quantity should be interfaced back to SAP
- whether SAP or the POS should initiate the stock count process
It would be very nice if you could share some best practices for the process flow, the system information flow, etc.
Best regards
Dominic

Hello
Both can be done, but as you say SAP is the core system, at least for tracking inventories, so it is better to create and integrate the inventories in SAP as well.
The main advice I would give is to avoid custom tables for inventory integration, because standard tables and custom ones won't always be updated consistently and support will be needed; stick to standard transactions.
If the inventory is centralised in SAP, you can for example count with a PDA and send your counts to the SAP connectors, where they are integrated from an XML file transformed into an IDoc for SAP.
One important parameter to consider is the inventory freeze. You have to decide whether or not to take into account stock movements that occur after your inventory document is created. If you do, the variance after the counting will depend on the initial book stock and also on the movements.
If you don't, the variance will only take into account your initial book stock, without considering the stock movements.
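
To make the freeze decision concrete, here is a minimal sketch of the two variance calculations (plain Java with hypothetical names, not SAP code):

import java.math.BigDecimal;

/** Illustrates count variance with and without an inventory freeze. */
public class CountVariance {

    /** Freeze active: movements after document creation are ignored,
        so the variance is simply counted minus the book stock at creation. */
    static BigDecimal withFreeze(BigDecimal counted, BigDecimal bookAtCreation) {
        return counted.subtract(bookAtCreation);
    }

    /** No freeze: goods movements posted between document creation and
        counting are added to the book stock before comparing. */
    static BigDecimal withoutFreeze(BigDecimal counted, BigDecimal bookAtCreation,
                                    BigDecimal movementsSinceCreation) {
        return counted.subtract(bookAtCreation.add(movementsSinceCreation));
    }

    public static void main(String[] args) {
        BigDecimal book = new BigDecimal("100");   // book stock when the count document was created
        BigDecimal moved = new BigDecimal("-5");   // a goods issue posted during the count
        BigDecimal counted = new BigDecimal("95"); // physical count result

        System.out.println("Variance with freeze:    " + withFreeze(counted, book));           // -5
        System.out.println("Variance without freeze: " + withoutFreeze(counted, book, moved));  // 0
    }
}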
Hope it helps you,
Génia.

Similar Messages

  • Best practices for app store developer download

    We use company-owned Macs to develop iOS applications. We frequently need to access the App Store to download free apps (such as Xcode and competitors' apps), but the developers don't want to enter their personal iTunes credentials on company Macs to do this. The company does not want to enter its credit card to create a company-wide iTunes account, for fear of weird charges ending up on the card. Furthermore, the reason Apple has logins for the App Store is to tailor the experience to the individual, so having a company login for many people does not seem to be the way to go. What is the best practice for this? Does iTunes have an "Organization" level object, such that that entity can add iTunes users to it, like the "Developer Teams" aspect for Apple developers?

    I haven't done this, so I haven't solved the problem as such. But those organizations who I've seen mention it either just get free apps via this process:
    http://support.apple.com/kb/HT2534
    or use a corporate credit card with the accounts. You can use a single credit card for all the accounts, to the best of my knowledge. There's also a Volume Purchase Plan for businesses which can simplify matters:
    http://www.apple.com/business/vpp/
    I believe that a redemption code obtained through this program can be used to set up an iTunes Store account, but I'm not certain.
    Regards.

  • Best Practice for Retail BP-ERP05/BP-INSTASS

    Hi SAP Gurus,
    We have installed ECC 6.0 SR2 with the BP Baseline and Installation Assistant:
    BP-ERP05     600VD   0000
    BP-INSTASS   600V1   0000
    I have applied these add-ons via SAINT for Retail, and now my questions are:
    1) Is there any need to upgrade a component on the server related to these add-ons?
    2) How do I check the Retail pre-configuration?
    I think that after applying these components the Retail pre-configuration should appear on the server, but it is not coming,
    so is there some setting/process for getting the Retail pre-configuration?

    Hi,
    These two components are required to call the installation assistant from SAP to implement the Best Practices building blocks.
    Now you have to call transaction /n/SMB/BBI and from there call the blocks that are appropriate for Retail; you need to download the required Best Practice blocks and related documents from the marketplace, since you have to follow the sequence when calling the pre-customized modules suitable for your scenario.
    Follow the document provided along with the software, or download it from the marketplace.
    Regards,
    Raju.

  • Best practice for declaring and initializing String?

    What is the best practice for the way Strings are declared in a class?
    Should it be
    private String strHello = "";
    or should I have the initialization in the constructors?

    The servlet constructor is usually called once, when the servlet is first accessed. But then again, maybe something else happens; Google 'servlet life cycle' if you must know.
    But let's take a step backwards here. It seems like you are trying to put fields into servlets. Don't do that. When two users fetch the servlet's URL at the same time, the fields are shared between the two hits. If you store something like HTTP parameters in the fields, the two hits' parameters will get mangled. The hits can end up seeing each other's parameter values.
    The best way is not to have fields in servlets. (Except maybe "static final" constants, sometimes rarely something else.) Many concurrency worries go away, servlet life cycle worries go away, servlet constructors go away, init() usually goes away.
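
    For illustration, a minimal sketch of a servlet written this way (class and parameter names are made up):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetingServlet extends HttpServlet {

        // OK: an immutable constant shared by all requests.
        private static final String GREETING = "Hello, ";

        // NOT OK: a mutable field here would be shared by concurrent
        // requests, and their values would get mangled.
        // private String name;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Request data stays in local variables, which are private
            // to the thread handling this particular hit.
            String name = req.getParameter("name");
            resp.setContentType("text/plain");
            PrintWriter out = resp.getWriter();
            out.println(GREETING + (name != null ? name : "world"));
        }
    }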

  • Query Best Practice for Reports

    I am new to Apex and I am wondering what the best practice is for storing your SQL queries for reports. I am a believer in storing all SQL behind package functions or procedures. It looks like the only options for report pages are to use a direct SQL query, or a function that returns a query as a string. Yes, the function method counts as putting the code in Oracle, but not really: it is still getting compiled and parsed on the Apex side. It would be nice if Apex could handle a cursor, but I have read that it doesn't directly; you have to have a function that returns a cursor and then create a pipelined function that calls the cursor function. That is kind of silly. Is there some other way to do this?
    Apex 4.2
    Oracle 11.2.0.2
    Thanks for any input.
    Jeff

    Hi Jeff,
    I'm not necessarily a believer in packaging queries. I'm a little more pragmatic in that I believe it may make sense in environments where you have a client environment that just expects a result set that is then manipulated by the client for the purposes of presentation, pagination etc. Apex has a different architecture in that the client is purely an HTML presentation layer (browser) and the presentation, pagination etc is formulated in the database along with the data using the Oracle web toolkit, which is a set of internal packages that produce HTML. Note that handling and manipulating ref cursors inside PL/SQL is not a joy, they were mainly designed to be passed out to external clients. (Often to shield programmers who don't or won't even try to understand relational concepts)
    This means that when you create a report based on a query, the Apex engine will manipulate that base query, depending on the display and pagination requirements of your report, before it submits that query to the database for execution. To get an idea of how this manipulation occurs, you can run your report in debug mode and check the actual query that is submitted to the database. If the query is presented as an already opened ref cursor, the Apex engine can't manipulate it in the way that it does a plain query. As you have already found out, the only way of using packaged queries returning ref cursors is by the use of a pipelined function, so that the Apex engine can treat the result as a normal query.
    This is the architecture of Apex, and I suspect that re-engineering the Apex engine to handle ref cursors natively, as opposed to using a pipelining trick, would be a considerable change. I hope this at least helps to explain why ref cursors and Apex don't mix. I personally don't see the purpose of having an abstraction layer of packaged queries below an abstraction layer of an API such as Apex. SQL is a perfectly good API.
    Regards
    Andre

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop on a computer, and for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and you'll be glad to have it if/when you suffer a failure or loss of data.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best Practices for new iMac

    I posted a few days ago re the failing HDD on my mid-2007 iMac. Long story short, I took it into the Apple store, where a Genius worked on it for 45 minutes before decreeing it in need of a new HDD. After considering the expense of adding memory, a new drive, hardware and installation costs, I got a brand new entry-level iMac (21.5" screen, 2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1 TB HDD, running Mavericks). I also got a SuperDrive. I do not need to migrate anything from the old iMac.
    I was surprised that a physical disc for the OS was not included. So I am looking for any best practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a full Time Machine backup (using an external G-Drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very much appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.

  • Best practices for office 365 SHARED CALENDAR for whole school / organization

    Hi,
    We need guidance on best practices for setting up a SHARED CALENDAR on an Office 365 Exchange server for an entire organization (a school) of 150 staff.
    Requirements
    + all staff should have read-only/reviewer permissions on the calendar
    + a handful of staff should have editor permissions on the calendar
    + the calendar should synchronise custom categories and colors
    Current Solution
    At the moment we have found that a shared mailbox is the best solution because:
    - all users can add the shared mailbox in Outlook 2010 as an additional, read-only mailbox
    - all the categories and colors for the calendar are automatically synchronised, because the color categories are stored within this mailbox
    - you can edit calendar permissions in Outlook to allow some users as "editor" of the calendar
    Problem with Current Solution
    The problem, however, is that the users also need to access this...

    Hi Aleksei,
    I think Inactive mailboxes in Exchange Online is the feature that you want. This feature makes it possible for you to preserve (store and archive) the contents of deleted mailboxes indefinitely.
    A mailbox becomes inactive when an In-Place Hold or a Litigation Hold is placed on the mailbox before the corresponding Office 365 user account is deleted.
    But I'm afraid that it might be impossible to "easily share certain folders or even whole mailbox with people in the company". As can be seen from the articles below, this only allows administrators, compliance officers, or records managers to use the In-Place eDiscovery feature in Exchange Online to access and search the contents of an inactive mailbox:
    http://technet.microsoft.com/en-us/library/dn144876(v=exchg.150).aspx
    http://blogs.technet.com/b/exchange/archive/2013/03/21/preserve-mailbox-data-for-ediscovery-using-inactive-mailboxes-in-exchange-online.aspx
    Anyway, this is the forum to discuss questions and feedback for the Microsoft Office client. For more details about your question, I would suggest you post in the dedicated forum for Exchange Online, where you can get more experienced responses:
    https://social.technet.microsoft.com/Forums/msonline/en-US/home?forum=onlineservicesexchange
    The reason why we recommend posting appropriately is that you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Regards,
    Ethan Hua
    TechNet Community Support

  • JSF - Best Practice For Using Managed Bean

    I want to discuss the best practice for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the authors mention that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. If you query in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like a save(ActionEvent)).
    I would be glad to see your thoughts on this.
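
    For reference, a minimal sketch of the "query in getXXX" pattern described above, with a simple staleness flag so a save listener can force a refresh (all names are hypothetical):

    import java.util.Collections;
    import java.util.List;
    import javax.faces.event.ActionEvent;

    // Sketch of a session-scoped backing bean that re-queries lazily
    // instead of loading the result set once in the constructor.
    public class ParentBean {

        public static class Child { /* row fields omitted */ }

        private List<Child> children;  // cached result set
        private boolean stale = true;  // forces a query on first access

        public List<Child> getChildren() {
            if (stale) {
                children = queryChildrenFromDatabase(); // hit the DB only when needed
                stale = false;
            }
            return children; // repeated binding calls in the same request reuse the cache
        }

        // Invoked from the page, e.g. <h:commandButton actionListener="#{Parent.save}"/>.
        public void save(ActionEvent event) {
            persistChanges(children);
            stale = true; // the next getChildren() re-reads fresh data
        }

        // Hypothetical data-access helpers; a real bean would use JDBC/JPA here.
        private List<Child> queryChildrenFromDatabase() { return Collections.emptyList(); }
        private void persistChanges(List<Child> rows) { /* no-op in this sketch */ }
    }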
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts, because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent and a list of children records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent to request scope, the setChildren method is never called when I submit the form. I am not sure if this is just ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input components' value bindings. Can someone verify this is true?
    In many cases I found that a request bean is very hard to work with if there are lots of updates (I had lots of trouble updating the binding values for rendered attributes).
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS. I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment. Which brings to light several questions in my mind:
    1. Which department(s) in a company should "own" the BPC application?
    2. If both, what's SAP's recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP's "Best Practice"?
    We are currently evaluating the best way to manage the system across multiple departments; however, there is no real business ownership in the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.


  • Best practice for keeping a mail session open in web application?

    Hello,
    We have a webmail like application where users login with their IMAP credentials, then are taken to an authenticated area of the site where they can manage different things about their email account.
    Right now the application is opening and closing a mail store connection (via a new javax.mail.Session) on each page load, based on the currently logged-in user's credentials. To me this seems like a bad practice: we keep opening and closing a connection on every page load.
    Are there any best practices for this situation? It would be nice to be able to open the connection to the mail server on login, then keep that connection open until the person logs out, the session expires, etc.
    I can probably put the javax.mail.Session into the HTTP session, but that seems like it would break any clustering functionality of Tomcat. This would be fine if the machine the user is on didn't fail, but I'd assume that if they failed over to another machine, the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and then first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
    Any pointers would be appreciated

    If you keep the connection open across pages, you're going to need to deal with
    timeouts - from the http session and from the mail server.
    If you don't keep the connection open, you're going to need to "resynchronize"
    your view of the store/folder with each operation, in case the folder is modified
    by another session.
    The former is easier in the common cases, especially if you don't care how gracefully
    you handle failures. The latter is more difficult in the common cases, but handles
    failure better, and in particular handles clustering better. You'll need to measure it to
    see if it meets your performance and scalability requirements. You may need to mix
    the two approaches to get acceptable performance.
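
    For what it's worth, a minimal sketch of the "check and reconnect" idea from the question, assuming the IMAP credentials are kept server-side for the duration of the HTTP session (names are made up; timeout and failure handling are elided):

    import java.util.Properties;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Store;

    // Holds one user's IMAP connection; an instance of this could live in the
    // HttpSession, with getStore() called on each page load.
    public class MailSessionHolder {

        private final String host;
        private final String user;
        private final String password;
        private Store store; // may go stale between page loads

        public MailSessionHolder(String host, String user, String password) {
            this.host = host;
            this.user = user;
            this.password = password;
        }

        /** Returns a connected Store, reconnecting if the old one was dropped. */
        public synchronized Store getStore() throws MessagingException {
            if (store == null || !store.isConnected()) {
                Session session = Session.getInstance(new Properties());
                store = session.getStore("imap");
                store.connect(host, user, password); // re-authenticate with the stored credentials
            }
            return store;
        }

        /** Call on logout / HttpSession invalidation. */
        public synchronized void close() {
            try {
                if (store != null && store.isConnected()) store.close();
            } catch (MessagingException ignored) { }
        }
    }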

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The batch field is used for the bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts, and we want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define a safety stock and set the MRP type to "Reorder Point Planning" (see the sketch below). This way you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned as well as unplanned maintenance will not get delayed.
    3. By doing goods issues against reservations, quantities can be tracked against the order and the equipment.
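
    As a toy illustration of what reorder point planning automates per material (hypothetical names, not SAP code):

    import java.math.BigDecimal;

    public class ReorderPointCheck {

        /** Returns the quantity to propose for procurement, or zero if stock suffices.
            MRP with reorder point planning does essentially this check per material. */
        static BigDecimal proposedOrderQty(BigDecimal onHand, BigDecimal reorderPoint,
                                           BigDecimal targetStock) {
            if (onHand.compareTo(reorderPoint) <= 0) {
                return targetStock.subtract(onHand); // replenish up to the target level
            }
            return BigDecimal.ZERO;
        }

        public static void main(String[] args) {
            // Spare part with safety stock folded into a reorder point of 10.
            System.out.println(proposedOrderQty(new BigDecimal("8"),
                    new BigDecimal("10"), new BigDecimal("25"))); // 17
            System.out.println(proposedOrderQty(new BigDecimal("12"),
                    new BigDecimal("10"), new BigDecimal("25"))); // 0
        }
    }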
    As this question is MM & WM related, the MM & WM forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for calc measures

    Hi,
    I read in one of the OLAP blogs from the experts that the best practice for creating calc measures is to have them in a separate cube.
    My question: suppose I have two cubes in an AW with different dimensionality, and both cubes have some calc measures to be created.
    Can I create one more cube (dedicated to calc measures) covering all the available dimensions in the AW, and create the calc measures for both cubes there, even though they have different dimensionality?
    If I cannot do this, then it means that if I have two cubes with different dimensionality, I would need two more cubes to hold their calc measures. If so, what would be the size implication for the AW?
    Thanks in advance.
    Thanks
    Brijesh

    "Can I create one more cube(dedicated to calc measures) considering all the available dimensions in the AW and create the calc measures for both the cube which has different dimensionality?"
    Yes, you can.
    This is a pretty common thing to do. Store your base measures efficiently and then use calculated measures with the 'superset' dimensionality to bring different shaped measures together for the purposes of calculations or simply to get them all in one 'hypercube' to simplify access by SQL query tools and apps.
    Kevin

  • Best Practice for module components based on API's

    Hi all,
    is there a white paper or other document which outlines best practice for using table/module component APIs, and the best approaches to get around their restrictions?
    I would also appreciate war stories about using this method of Forms development, bearing in mind I would be using it as the foundation of a web-deployed application (intranet first, ultimately internet).
    Thanks
    Mark

    You cannot add agents to skills dynamically; however, you can queue the caller into more than one CSQ based on statistics. You would use the Get Reporting Statistics step and an If step within the Queued branch of your first Select Resource step. If the condition is met (e.g. Contacts Waiting >= 10), then execute a second Select Resource step within the queued branch of the first. The contact would then be waiting in both CSQs for an agent.

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is this:
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration files specific to the Dev/Stage/Prod instances:
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as XML elements/variables inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model the value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running; let's say some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of this global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread (Re: OSB: What is best practice for reading configuration information)
    suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I have read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. with http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage, I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
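
    In OSB itself this would be an XQuery/XPath expression over $inbound, but the string handling is simple; a hypothetical Java equivalent of extracting the environment from such a URI:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EnvFromUri {

        // Assumes endpoints follow the convention /service/<env>/<serviceName>.
        private static final Pattern ENV = Pattern.compile("^/service/(dev|stage|prod)/[^/]+$");

        static String environment(String uri) {
            Matcher m = ENV.matcher(uri);
            return m.matches() ? m.group(1) : "unknown";
        }

        public static void main(String[] args) {
            System.out.println(environment("/service/prod/service1"));  // prod
            System.out.println(environment("/service/stage/service1")); // stage
        }
    }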
