Best Practices To Size the Datafile on ASM in Oracle 11gR2

Hi DBAs,
I need to know what the next-extent size for datafiles should be on ASM. I have created an 11gR2 database with ASM. A certified DBA has told me to create the datafiles as small as possible and make them autoextend with a next-extent size of 1 MB. The expected data growth is about 20-25 GB per month.
Please let me know the best practices for sizing datafiles and extents in 11g using ASM. A pointer to any good document or note would be a great help.
Thanks
-Samar-

Hi Samar,
> I need to know what the next-extent size for datafiles should be on ASM. I have created an 11gR2 database with ASM. A certified DBA has told me to create the datafiles as small as possible and make them autoextend with a next-extent size of 1 MB. The expected data growth is about 20-25 GB per month.
> Please let me know the best practices for sizing datafiles and extents in 11g using ASM. A pointer to any good document or note would be a great help.
I don't think there are any recommendations for that; at least, I've never seen one. As you probably know, ASM already divides ASM files (any sort of file written to an ASM diskgroup) into extents, and those extents are composed of AUs. So, regardless of the datafile size, ASM will balance the I/O across all ASM disks in the diskgroup.
I guess this thread (AU_SIZE and Variable Extent Size), even though it is not exactly the subject of your question, will help you.
I do believe there are more important factors for ASM performance, such as stripe size, LUN size, storage configuration, etc.
Hope it helps,
Cerreia
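
No fixed rule comes out of this thread, but as a concrete illustration, here is a minimal JDBC sketch of adding an autoextending datafile on an ASM diskgroup. The diskgroup name (+DATA), the USERS tablespace, the connection details and the sizes are assumptions chosen for the example, not a recommendation from this thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AddAsmDatafile {
        public static void main(String[] args) throws Exception {
            // Host, service name and credentials are placeholders - adjust for your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "password");
                 Statement stmt = conn.createStatement()) {

                // Pre-size the datafile to roughly one month of growth and let it autoextend
                // in larger steps than 1 MB so that extensions are infrequent.
                stmt.execute("ALTER TABLESPACE users ADD DATAFILE '+DATA' SIZE 25G "
                           + "AUTOEXTEND ON NEXT 1G MAXSIZE 32767M");
            }
        }
    }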

Similar Messages

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster (cluster files#3). That last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs is up to the Sys Admin, rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option ?
    Can I avoid this third disk group just for the voting and cluster registry files ?
    Am I OK to run the lower-version Oracle 10gR2 database on 11gR2 RAC and 11gR2 ASM?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that for the OS it is better to have multiple LUNs, since the I/O can be better handled. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you keep in mind that your database may one day need more space (more disks), then you should choose a disk size that can easily be added without wasting too much space.
    E.g.:
    If you have a 900GB database, does it make sense to use only 1 LUN of 1TB?
    What happens if the database grows, but only slightly above 1TB? Then you would have to add another 1TB disk... and you lose a lot of space.
    Hence it makes more sense to use 4 disks of 250GB each, since the diskgroup can be grown more easily (just add another 250GB disk).
    O.k., there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
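
    As a quick sanity check on the advice above (all LUNs in a diskgroup the same size), you can query the V$ASM views from the database instance. The JDBC sketch below is only an illustration; the host, service name and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckAsmDiskSizes {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "password");
                 Statement stmt = conn.createStatement();
                 // One row per ASM disk, so unequal LUN sizes within a diskgroup stand out immediately.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT dg.name AS dg_name, d.path, d.total_mb, dg.type AS redundancy "
                       + "FROM v$asm_disk d JOIN v$asm_diskgroup dg ON d.group_number = dg.group_number "
                       + "ORDER BY dg.name, d.path")) {
                while (rs.next()) {
                    System.out.printf("%-10s %-35s %8d MB  redundancy=%s%n",
                            rs.getString("dg_name"), rs.getString("path"),
                            rs.getLong("total_mb"), rs.getString("redundancy"));
                }
            }
        }
    }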

  • Forums : Best practices to FAX the BI publisher reports

    What are the best practices to FAX the BI publisher reports in Oracle Applications?
    Also please let me know the advantages and disadvantages using CUPS printer/RightFax Server.
    Thanks for your help.
    Liyakath

    You can use any of the methods you have listed to create the XML data. Oracle's seeded code uses PL/SQL to generate XML rather than Reports, since Reports technology will not be used in the next version of Applications (aka Fusion).
    Srini

  • BEST PRACTICE TO PARTITION THE HARD DISK

    Can someone please guide me on THE BEST PRACTICE TO PARTITION THE HARD DISK FOR 10G R2 on operating system HP-UX 11?
    Thanks,
    Amol

    I/O speed is a basic function of number of disk controllers available to read and write, physical speed of the disks, size of the I/O pipe(s) between SAN and server, and the size of the SAN cache, and so on.
    Oracle recommends SAME - Stripe And Mirror Everything. This comes in RAID10 and RAID01 flavours. Ideally you want multiple fibre channels between the server and the SAN. Ideally you want these LUNs from the SAN to be seen as raw devices by the server and use these raw devices as ASM devices - running ASM as the volume manager. Etc.
    Performance is not achieved by just partitioning. Or just a more memory. Or just a faster CPU. Performance planning and scalability encapsulates the complete system. All parts. Not just a single aspect like partitioning.
    Especially not partitioning, as a partition is simply a "logical reference" to a "piece" of the disk. I/O performance has very little to do with how many pieces you split a single disk into. That is the management part. It is far more important how you stripe, and whether you use RAID5 instead of a RAID1 flavour, etc.
    So I'm not sure why you are all uppercase about partitioning....

  • Best practices for making the end result web help printable

    Hi all, using TCS3 Win 7 64 bit.  All patched and up to date.
    I was wondering what the best practices are for the following scenario:
    I am authoring in Frame, link by reference into RH.
    I use Frame to generate PDFs and RH to generate webhelp.
    I have tons of conditional text which ultimately produce four separate versions of PDFs as well as online help - I handle these codes in FM and pull them into RH.
    I use a css on all pages of my RH to make it 'look' right.
    We now need to add the ability for end users to print the webhelp - outside of just CTRL+P because a)that cuts off the larger images and b)it doesn't show header, footer, logo, date, etc. (stuff that is in the master pages in FM).
    My thought is doing the following:
    Adding four sentences (one for each condition) in the FM book on the first page. Each one would be coded for audience A, B, C, or D (each of which require separate PDFs) as well as coded with ONLINE so that they don't show up in my printed PDFs that I generate out of Frame. Once the PDFs are generated, I would add a hyperlink in RH (manually) to each sentence and link the associated PDF (this seems to add the PDF file to the baggage files in RH). Then when I generate my RH webhelp, it would show the link, with the PDF, correctly based on the condition of the user looking at the help.
    My questions are as follows:
    1- This seems more complicated than it needs to be. Is it?
    2- I would have to manually update every single hyperlink each time I update my FM book, because I am single-sourcing out of Frame and (as far as I can tell) I am unable to link a PDF within the Frame doc. I update the entire book (over 1500 pages) once every 6 weeks, so while this wouldn't be a common occurrence it will happen regularly, and it would be manual.
    3- Eventually, I would have countless PDFs inside RH. I assume this will eventually impact performance. So this also doesn't seem ideal?
    If anyone has thoughts/suggestions on a simpler or better way to do this, I'd certainly appreciate it. I have watched the Adobe TV tutorial on adding a master page, but that seems to remove the ability to use a css across all my topics, and it also requires manually adding a hyperlink to the PDF file, which is what I am proposing above anyway (so I am not sure of the benefit).
    Thanks in advance,
    Adriana

    Anything other than CTRL + P is going to create a lot of work so perhaps I can comment on what you see as drawbacks to that.
    a)that cuts off the larger images and b)it doesn't show header, footer,
    logo, date, etc. (stuff that is in the master pages in FM).
    Larger images.
    I simply make a point of keeping my image sizes down to a size that works. It's not a problem for me but that doesn't mean it will work for you. Here all I am doing is suggesting you review how big a problem that would be.
    Master Page Details
    I have to preface this with the statement that I don't work with FM. The details you refer to print when they are in RoboHelp master pages. Perhaps one of the FM users here can comment on how to get FM master pages to come through.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • What's the best practice to manage the page file?

    We have one Hyper-V server running Windows 2012 R2 with 128 GB RAM and two drives (C and D). It is set up with "Automatically manage paging file size for all drives". What's the best practice to manage the page file?
    Bob Lin, MCSE & CNE. Networking, Internet, Routing, VPN troubleshooting on http://www.ChicagoTech.net. How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen. 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make sense on a Hyper-V system to set aside enough space to capture all the memory on the system, because only a very small portion of that memory is used by the parent partition. Most of the memory is under the control of the individual VMs.
    Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it. A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file. That made it so I couldn't patch my systems. I went back in, set the page file to 1-4 GB, and have not had any issues since.
    . : | : . : | : . tim

  • Best practice to integrate the external(ERP or Database etc) eCommerce data in to CQ

    Hi Guys,
    I am referring to the Geometrixx-Outdoors project for building eCommerce functionality in our project.
    Currently we are integrating with an ERP system to fetch the product details.
    Now I need to store all the product data from the ERP system in our CRX repository under the etc/commerce/products/<myproject> folder structure.
    Do I need to create a CSV file structure as explained in the geometrixx-outdoors project and place it exactly the way it is described in the documentation? By doing this, will the CSV importer import the data into CRX and create the sling:Folder and nt:unstructured nodes?
    Please guide me on the best practice for integrating external eCommerce data into the CQ system to build eCommerce projects.
    Are there any other best practices ?
    Your help in this regard is really appreciated.
    Thanks

    Hi Kresten,
    Thanks for your reply.
    I went through the eCommerce framework link which you sent.
    Can you give me a few of the steps needed to use the eCommerce framework to pull all the product information into our CRX repository, and also explain how to synchronise the ERP system with the CRX data? Is there a scheduling mechanism to pull the data from our ERP system and sync it with the CRX repository?
    Thanks
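
    The thread itself leaves the scheduling question open, but one common way to schedule such a sync in CQ/AEM is a Sling scheduled job: any Runnable OSGi service registered with a scheduler.expression property is picked up by the Sling Commons Scheduler. The sketch below only illustrates that pattern; the class name, the cron expression and the omitted ERP/JCR plumbing are assumptions, not something taken from this thread.

    import org.osgi.service.component.annotations.Component;

    @Component(
            service = Runnable.class,
            property = {
                    "scheduler.expression=0 0 2 * * ?",   // run the sync every night at 02:00
                    "scheduler.concurrent=false"          // never run two syncs at the same time
            })
    public class ErpProductSyncJob implements Runnable {

        @Override
        public void run() {
            // Fetch product data from the ERP system and create/update product nodes under
            // /etc/commerce/products/<myproject> here (ERP client and JCR access omitted).
        }
    }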

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Best Practice for enhancing the SAP delivered standard WD ABAP application

    Hi,
    I am new to Web Dynpro ABAP.
    I need to enhance a SAP-delivered standard Web Dynpro component (a complex component with business objects & POWL).
    Kindly let me know the best practice for enhancing standard WD ABAP, from the two options below:
    1) Copy the component, create a "Z" version of it and make the changes there, or
    2) Enhance the standard component directly, without making a "Z" copy.
    Regards,
    NS

    Hi NS,
    If it is a standard component, it is better to enhance the component rather than copy it into a Z component.
    If there is any issue within the standard component, SAP supports it through notes and OSS messages. If it is a Z component, SAP doesn't support it.
    If business packages are upgraded, the changes will be applied to the standard component but not to the Z components, so we could miss them.
    Further, since a standard component might be used in many places, reflecting all the changes would be difficult if it were a Z component.
    Regards,
    Harsha

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practices for using the knowledge directory

    Anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Anyone have any best practices for using the KD or setting up taxonomies in the KD?


  • Best practices for using the 'cost details' fields

    Hi
    Please could you advise us on the best practices for using the 'cost details' fields within Pricing? Currently I cannot find a way to surface the individual Cost Details fields within the Next Generation UI, even with the 'display both cost and price' tick box ticked. It seems that these fields are surfaced when the Next Generation UI is turned off, but I cannot find them when it is turned on. We can see the 'Pricing Summary' field but this does not fulfil our needs, as some of our services have both recurring and one-off costs.
    Attached are some screenshots to further explain the situation.
    Many thanks,
    Richard Thornton

    Hi Richard,
    If you need to configure dynamic pricing that may vary by tenant and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0. 
    The cost detail functionality in 9.4 will likely be merged with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module.

  • What are the best practices to extend the overall lifespan of my MacBook Pro and its battery?

    In general, what are the recommended practices to extend the lifespan of my battery, and other general practices to keep the lifespan and characteristics (such as performance and speed) like new, on my MacBook Pro, which I bought this past fall (2011)?

    About Batteries in Modern Apple Laptops
    Apple - Batteries - Notebooks
    Extending the Life of Your Laptop Battery
    Apple - Batteries
    Determining Battery Cycle Count
    Calibrating your computer's battery for best performance
    MacBook and MacBook Pro- Mac reduces processor speed when battery is removed while operating from an A-C adaptor
    Battery University
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utilities are Disk Warrior and Drive Genius. Disk Warrior only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible. Drive Genius provides additional tools not found in Disk Warrior; versions 1.5.1 and later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep.  Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced since Tiger.  These utilities have limited or no functionality with Snow Leopard or Lion and should not be installed.
    OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems. For more about malware see Macintosh Virus Guide.
    I would also recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack.  If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line.  Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. There is no confirmation that this version also works with Lion.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    Carbon Copy Cloner
    Data Backup
    Deja Vu
    SuperDuper!
    SyncTwoFolders
    Synk Pro
    Synk Standard
    Tri-Backup
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at CNet Downloads or MacUpdate.
    Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity as free space.

  • Best practice to handle the class definitions among storage enabled nodes

    We have a common set of cache servers that are shared among various applications. A common problem that we face upon deployment is missing class definitions newly introduced by one of the application nodes. Are there any practical approaches / best practices to address this problem?

    Is it the cache servers themselves or your application servers that are having problems with loading classes?
    In order to dynamically add classes (in our case scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am, however, not so sure how/if this would work for the cache servers themselves, if that is your problem.
    Anyhow a simplistic cache class loader may look something like this:
    import com.tangosol.net.CacheFactory;

    /**
     * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
     * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
     * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
     * cache. For example the first three parts of the binary name (com.scania.xxxx in the example) could be the
     * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
     * a specific application are stored in the same partition and this way can be updated atomically by a processor or
     * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
     * application will only involve one cache node!
     */
    public class CacheClassLoader extends ClassLoader {
        public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";

        private final String classCacheName;

        public CacheClassLoader() {
            this(DEFAULT_CLASS_CACHE_NAME);
        }

        public CacheClassLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public CacheClassLoader(ClassLoader parent, String classCacheName) {
            super(parent);
            this.classCacheName = classCacheName;
        }

        @Override
        public Class<?> loadClass(String className) throws ClassNotFoundException {
            // Look the class bytes up in the cache under the binary class name.
            byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
            if (bytes == null) {
                // Not in the cache - fall back to the normal delegation model.
                return super.loadClass(className);
            }
            return defineClass(className, bytes, 0, bytes.length);
        }
    }

    And a simple "loader" that puts the classes in a JAR file into the cache may look like this:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;
    import com.tangosol.net.CacheFactory;

    /**
     * This class loads the classes from a JAR file into a code cache.
     */
    public class JarToCacheLoader {
        private final String classCacheName;

        public JarToCacheLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public JarToCacheLoader() {
            this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
        }

        public void loadClassFiles(String jarFileName) throws IOException {
            JarFile jarFile = new JarFile(jarFileName);
            System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
            for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
                final JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                    final InputStream inputStream = jarFile.getInputStream(entry);
                    final long size = entry.getSize();
                    int totalRead = 0;
                    int read = 0;
                    byte[] bytes = new byte[(int) size];
                    // Read the complete class file into the byte array.
                    do {
                        read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                        if (read > 0) {
                            totalRead += read;
                        }
                    } while (read > 0 && totalRead < size);
                    if (totalRead != size) {
                        System.out.println(entry.getName() + " failed to load completely, " + size + ", " + read);
                    } else {
                        // Key the class bytes with the binary class name (com/zzz/Xxx.class -> com.zzz.Xxx),
                        // which is what CacheClassLoader expects.
                        String entryName = entry.getName();
                        String className = entryName.substring(0, entryName.length() - ".class".length()).replace('/', '.');
                        System.out.println(className);
                        CacheFactory.getCache(classCacheName).put(className, bytes);
                    }
                    inputStream.close();
                }
            }
            jarFile.close();
        }

        public static void main(String[] args) {
            JarToCacheLoader loader = new JarToCacheLoader();
            for (String jarFileName : args) {
                try {
                    loader.loadClassFiles(jarFileName);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    Standard disclaimer - this is prototype code, use at your own risk :-)
    /Magnus
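
    For completeness, a tiny usage sketch of the two classes above could look like the following; the JAR name and the loaded class name are made-up examples and assume the class has a public no-arg constructor.

    import com.tangosol.net.CacheFactory;

    public class CacheClassLoaderDemo {
        public static void main(String[] args) throws Exception {
            // Publish all classes from a (hypothetical) JAR into the class cache.
            new JarToCacheLoader().loadClassFiles("my-scripts.jar");

            // Load one of the published classes through the cache-backed class loader and instantiate it.
            CacheClassLoader loader = new CacheClassLoader(
                    Thread.currentThread().getContextClassLoader(),
                    CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
            Class<?> clazz = loader.loadClass("com.example.scripts.MyScript");
            Object instance = clazz.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName() + " from the cache");

            CacheFactory.shutdown();
        }
    }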

  • What are the best practices for using the enhancement framework?

    Hello enhancement framework experts,
    Recently, my company upgraded to SAP NW 7.1 EhP6.  This presents us with the capability to use the enhancement framework.
    A couple of senior programmers were asked to deliver a guideline for use of the framework.  They published the following statement:
    "SAP does not guarantee the validity of the enhancement points in future releases/versions. As a result, any implemented enhancement points may require significant work during upgrades. So, enhancement points should essentially be used as an alternative to core modifications, which is a rare scenario.".
    I am looking for confirmation or contradiction to the statement  "SAP does not guarantee the validity of enhancement points in future releases/versions..." .  Is this a true statement for both implicit and explicit enhancement points?
    Is the impact of activated explicit and implicit enhancements on an SAP upgrade much greater than that of BAdIs and user exits?
    Is there any SAP published guidelines/best practices for use of the enhancement framework?
    Thank you,
    Kimberly

    Found an article that answers this question quite well:
    [How to Get the Most From the Enhancement and Switch Framework as a Customer or Partner - Tips from the Experts|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0f0373e-a915-2e10-6e88-d4de0c725ab3]
    Thank you Thomas Weiss!
