Environment and RAID best practices please...

Hello, experts...
I'm facing a scenario I've never worked with before, and it's a very important one, so I would like some tips from you experts...
I'll have to run Oracle 10.2.0.1.0 on Windows Server 2003 Enterprise Edition in the following scenario:
a server containing a RAID 10 array of 6 hard disks, and a storage unit containing a RAID 10 array of 6 more disks...
What would be best for Oracle performance? Split user data and indexes across the two arrays, or keep them together on one RAID 10 and split off the redo and archive logs?
Like this:
Situation 1
Server: USER DATA, TEMP TS
Storage: USER INDEX, REDO and ARCHIVES
Situation 2
Server: USER DATA, USER INDEX, TEMP
Storage: REDO and ARCHIVES
Are there any tips for this? What is the best way to split the files in this scenario?
Any kind of help would be appreciated.
Regards

I would put UNDO with the user data rather than with the indexes... pretty difficult to explain, but I think it will be better. And one more thing: the indexes can go with the archives.
That's an interesting thought and makes sense. Are there any other whitepapers that endorse this view?

I asked for RAID disk layout advice for a hybrid OLTP/batch app earlier, in regards to a disk layout close to that one.

I understand the reasons for separating archive logs and redo logs out, but I wasn't so sure about undo/data/index/temp. I've separated data and indexes, obviously, but I wasn't sure whether dedicating an entire RAID 1+0 pair to UNDO and another pair to TEMP was efficient. Oracle's whitepapers (see this one on S.A.M.E. and this ASM-centric one) seem to point to a "stripe and mirror everything" approach that eliminates bottlenecks by spreading I/O absolutely as far as you can across as many spindles as you can. I'm uncertain at what 'threshold' this appears to take effect, as it seems to only apply to larger SANs and not disk arrays in the 10-15 disk range.
Does anybody else have experience with this 'medium-sized' layout range?
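Whichever split you choose, it may help to verify it empirically once the database is running. Below is a minimal sketch (my own illustration, not from any whitepaper) that samples per-datafile physical I/O from v$filestat over JDBC; the connection string and credentials are placeholders, and the account needs access to the v$ views:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Quick I/O balance check: physical reads/writes per datafile.
    // Connection details are placeholders for your environment.
    public class FileIoReport {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:orcl", "system", "password");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT df.name, fs.phyrds, fs.phywrts "
                + "FROM v$filestat fs JOIN v$datafile df ON fs.file# = df.file# "
                + "ORDER BY fs.phyrds + fs.phywrts DESC");
            while (rs.next()) {
                System.out.println(rs.getString(1)
                    + "  reads=" + rs.getLong(2)
                    + "  writes=" + rs.getLong(3));
            }
            con.close();
        }
    }

If the files on one array account for nearly all of the reads and writes while the other array sits idle, the split is wasting spindles, which is essentially the argument behind the 'stripe and mirror everything' papers.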

Similar Messages

  • HTML and CSS Best Practices for Eloqua?

    Hello Topliners Community,
    My name is Ben and I am a Web Designer. I am currently looking for any guidance on HTML and CSS best practices when working with Eloqua. I am interested in the best practices for e-mail and landing pages.
    Thank you,
    Ben

    Personally I like to upload my custom-created HTML/CSS into Eloqua instead of using the WYSIWYG editor.
    But if you must use it, then right-clicking on text boxes and clicking Edit Source is the way to go.
    There was a good discussion on editing your forms with CSS:
    Energize Your Eloqua10 Forms with CSS
    CSS can be used to heavily customize the layout of forms in Eloqua10. In this article we will cover some common formatting use cases on Eloqua10 landing pages, with samples. Further details about uses of CSS in Eloqua10 form templates can be found here: EE12 - Do It - Eloqua - Energize E10 Forms
    Eloqua10 Forms HTML Structure
    Below is an outline of the structure of the HTML generated by Eloqua when a form is added to a landing page.  By targeting the HTML classes highlighted below, we can control the layout of any form on your landing page.
    For the rest of the page: http://topliners.eloqua.com/docs/DOC-3015

  • RAID best practice on UCS C210 M2

    Hi everyone,
    I have a C210 M2 server with an LSI MegaRAID 9261-8i card and 16 hard drives of 146 GB each.
    If I am going to run CUCM, CUC and CUPS to support up to 5000 users, what is the best practice for configuring the virtual drives?
    Do I need any hot spare drives?
    Thanks.

    Hello,
    Please go through following two links
    http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware#UC_on_UCS_Tested_Reference_Configurations
    http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
    I am not sure whether your hardware configuration falls under a TRC or is specs-based, and whether it can support the OVAs for that many users across all apps.
    If it does, the RAID recommendation I notice is:
         2 drives for ESXi in RAID 1
         the rest of the HDDs in RAID 5
    If it is specs-based, you can configure a hot spare for the RAID 5 volume.
    HTH
    Padma

  • Users And Security Best Practice

    Dear Experts
    I am designing an application with almost fifty users scattered across different places. Each user should access tables according to his/her criteria. For example, salessam and salesjug can see only the sales-related tables; purchasedon should access only purchase-related tables. I have the following problems:
    Is it best practice to create 50 users in the DB, i.e. 50 schemas? Where are these users normally created?
    Or is it better for me to maintain a table of users and their passwords in my design itself and regulate access through the front end? It seems that this would be risky and a cumbersome process.
    Please advise
    thanks
    Manish Sawjiani

    You would normally create a single schema to own the objects and 50 users to use them. You would use roles and object privileges to control access.
    Well, this is the classic 'Oracle' approach to doing this. I might say it depends a bit on what you want to achieve. Let's call this approach A.
    The other option is to have your own user/pwd table. You could create your own custom authentication, but I would go for the built-in Application Express Users authentication scheme. You can manage the users via the frontend (Application Builder > Manage Application Express Users). There you can manage the groups and end users, which you can leverage in your Apex app. You can even use the APIs to create the users programmatically. It is all done for you. Let's call this approach B.
    Some things to consider:
    1) You want to create a web application and also other applications that access the data stored in Oracle (e.g. PHP, Oracle Forms, Perl), or to allow access via SQL*Plus. Then you should use approach A. This way you don't need to reimplement security for each of these clients.
    2) You want to create one (or multiple) Apex applications only. This will be the only mechanism the users will access your data. Then I would go for approach B.
    3) When using approach A, some users didn't like that all users would have access to their workspace, including the SQL command line, and would have the capability of building applications and possibly changing the data they have access to through the Oracle roles. Locking down this capability is possible, but it takes some effort and requires Apache as a proxy.
    4) When using approach A you will need DBA privileges to manage the users and assign the roles. This might not always be possible or desired, depending on who will manage the Oracle XE instance.
    5) Moving the application including the end users to another machine is a bit easier using approach B since they are exported via the application export mechanism. Using approach A you would have to do it yourself. Be aware that the passwords are lost when you install the users into a different Oracle XE instance.
    6) If you design the application using approach B, you will have to design security in a way that doesn't rely on the Oracle roles/grants security mechanisms. This makes it easier to change the authentication scheme later, for example if you later want to use an LDAP directory, a different custom authentication scheme, or even SSO (SSO is not available out of the box but is feasible).
    Using approach A, you would have to recode the security mechanisms (which user is allowed to update/delete which data).
    Hope that clarifies your options a bit.
    ~Dietmar.
    Message was edited by:
    Dietmar Aust
    Corrected a typo in (5): approach B instead of approach A, sorry.
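    For completeness, here is a minimal sketch of approach A over JDBC. Every identifier in it (app_owner.sales, sales_role, salessam, the connection string) is illustrative only; you would normally run the same statements as a DBA in SQL*Plus:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Approach A in a nutshell: one schema owns the tables, access is
    // controlled with roles and object privileges. All names illustrative.
    public class ApproachASetup {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:xe", "system", "password");
            Statement st = con.createStatement();
            // one role per functional area, carrying the object privileges
            st.execute("CREATE ROLE sales_role");
            st.execute("GRANT SELECT, INSERT, UPDATE ON app_owner.sales TO sales_role");
            // one account per end user, carrying only the role
            st.execute("CREATE USER salessam IDENTIFIED BY changeme");
            st.execute("GRANT CREATE SESSION TO salessam");
            st.execute("GRANT sales_role TO salessam");
            con.close();
        }
    }

    Repeat the CREATE USER / GRANT pair for each of the 50 users; access control then lives entirely in the database, which is what lets other clients (Forms, Perl, SQL*Plus) share the same security.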

  • Configuring AD Sites and Services best practice for multiple office site ?

    Hi People,
    Can anyone here please suggest, or share a link to, the best practice for configuring AD Sites and Services for a single AD domain with multiple office sites?
    I'd like to know more about the number and direction of the connections between the Domain Controllers in each site and the Data Center, and vice versa.
    Thanks.
    /* Server Support Specialist */

    This series can be useful:
    Active Directory Structure Guidelines – Part 1

  • Product and SWCV best practice

    We have a 3rd-party product that tends to change product versions
    frequently, about once every 3-5 months.
    As the SAP software logistics mechanism is based on a hierarchy of
    Product -> Product Version -> SWCU -> SWCV,
    my question is:
    what is the best way to maintain this product versioning in the SLD and IR,
    to allow best-practice software logistics in XI and maintenance?
    Please share from your knowledge and experience only.
    Nimrod

    Structuring Integration Repository Content - Part 1: Software Component Versions
    Have a look at that weblog and also search for the subsequent parts of the same series. That should give you a good idea.

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering the fact that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample java class that will call the API's
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy the files from the OIM Design Console folders into the corresponding <PROJECT_HOME> folders.
    That is:
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the API's
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;
    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory =
                    new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility =
                    (tcUserOperationsIntf) ioUtilityFactory.getUtility(
                        "Thor.API.Operations.tcUserOperationsIntf");
                // find all users whose first name is "System"
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace "welcome1" with your own password.
    + Save the class
    To run the example class perform the following steps:
    + In the top menu click Run, and open the "Create, manage, and run configurations" wizard. (In the menu, this can be either "Run..." or "Open Run Dialog...", depending on the version of Eclipse used.)
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please replace the URL, in ./config/xlconfig.xml, to your application server if not running on localhost or not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • Oil and gas best practices - need information

    Colleagues!
    Help me find some information about SAP oil and gas best practices.
    I am interested in .ppt presentations and Word documents that in some way describe best practices for the oil and gas industry.
    Thanks in advance for your help.

    Hi,
    Can you please check this link http://www.sap.com/industries/oil-gas/index.epx.
    Hope this helps you.
    Rgds
    Manish

  • Data Migration and Consolidation Best Practices?

    Hi guys
    Do you know what the best practice for data migration to FCSvr is? We're trying to consolidate all media from various FireWire/internal drives onto a centralised RAID directly attached to a dedicated server. We've found that dragging and dropping an FCP project file uploads its associated media. The problem is that if there are several versions or separate projects linking to the same media, the associated media is re-uploaded every time! This results in that media getting duplicated several times. The issue appears to be that FCSvr creates a subfolder for every project file being uploaded, which contains all of the project's media.
    This behaviour is not consistent with caching assets, checking out a project file, making changes and checking it back in: there, FCSvr is quite happy for a project file to link to media existing at the root level of the media device.
    We are of course running the latest version of everything. Hope you can help, as we're pulling our hair out here!
    Regards
    Gavin

    Hi,
    Do you really need an ETL tool for these loading processes? Have you considered doing it in SQL/PLSQL? If the performance of the application is one of the main priorities, I would definitely consider doing it in SQL/PLSQL.
    Because of the huge amount of data, and because your source and target systems are Oracle DBs, I wouldn't recommend you use Informatica.
    Also, because source and target are Oracle DBs and it should be near real time, you should have a look at Oracle Streams.
    Regards
    Maurice

  • FCP and HVX Best Practices ?

    Just looking for some suggestions on the easiest settings for using the HVX with FCP. I am not planning to shoot at 1080 all the time and was just wondering what the best practices are as they relate to workflow, stable capture settings for importing, and any shortcuts to save some time, etc.
    Thanks !
    Andrew

    We always shoot 720p. This way you can get the native format of 24PN, which saves tons of drive space and still looks great. I output via the Kona 3 to a 1080p23.98 D5 master and all is good.
    There aren't any capture settings; you just use Import > Panasonic P2. Using SATA drives for media storage, either internal drives, eSATA drives or RAIDs, will get you better performance.
    Click on Underdog to get to my blog about working with P2...
    Shane

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework (about 6 years ago now), so bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped?
    2) In relation to session-scoped managed beans, when does a JSF implementation starts a new session ? That is, when does the implementation such as Mojarra call ExternalContext.getSession( true ) .. and when does it simply uses an existing session ( calling ExternalContext.getSession( false ) ) ?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form (when doing a POST or a GET or a post-back to the same page).
    Is this best practice? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?
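    One pattern I have seen (sketched below; the names and the navigation outcome are purely illustrative, not something the JSF spec mandates) is to keep the model a plain POJO and let a thin request-scoped backing bean own the JSF-specific parts:

    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;

    // Plain model class: no JSF imports, reusable outside the web layer.
    class Customer {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Thin controller: exposes the POJO to the page, holds only action logic.
    @ManagedBean
    @RequestScoped
    public class CustomerController {
        private Customer customer = new Customer();
        public Customer getCustomer() { return customer; }
        public String save() {
            // persistence would be delegated to a service/DAO, not the POJO
            return "confirmation"; // navigation outcome
        }
    }

    The page then binds to #{customerController.customer.name}, and the POJO stays free of web-layer dependencies.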

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
    It's better because it ensures the bean gets instantiated, stuck in the session (which gets instantiated itself), the bean gets initialised, resource-injected, etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    Thanks.
    1) But if I only want to check that the bean is in the session, and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalContext.getSessionMap().get(<beanName>).
    I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
    In my case, there is an external application (e.g. a workflow system from a vendor) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was thinking of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username/password and other information into a bean which I store in the session.
    2) If the request parameter does not exist, then look for a bean in the session to see if the actual request came from within the JSF webapp itself (e.g. if it was not triggered by the external workflow system).
    3) If this bean does not exist at all (e.g. it was triggered by something other than the external workflow system that I was expecting), then I would prefer to skip the whole JSF lifecycle for the current request and immediately redirect to a different page (be it a static HTML page or another JSF page).
    4) If the bean exists, then proceed with the normal JSF lifecycle. (A rough sketch of such a listener follows below.)
    I could also, between [1] and [2], do a quick check to verify that the username and password are indeed valid on the external system (they have a Java API to do that), and if the credentials are not valid, I would likewise skip the JSF lifecycle for the current request and redirect to a different page.
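    Here is a rough, hedged sketch of that PhaseListener idea. The parameter names ("wfTicket", "user", "password"), the error page, and the session key are all placeholder assumptions, not a vendor API; the listener would still need to be registered in faces-config.xml:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.faces.FacesException;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.PhaseListener;

    // Runs right after RESTORE_VIEW, before the rest of the lifecycle.
    public class WorkflowEntryPhaseListener implements PhaseListener {

        public PhaseId getPhaseId() {
            return PhaseId.RESTORE_VIEW;
        }

        public void beforePhase(PhaseEvent event) {
        }

        public void afterPhase(PhaseEvent event) {
            FacesContext fc = event.getFacesContext();
            ExternalContext ec = fc.getExternalContext();
            Map<String, String> params = ec.getRequestParameterMap();

            // 1) Request from the external workflow system: stash its data in the session.
            if (params.get("wfTicket") != null) {
                Map<String, String> ctx = new HashMap<String, String>();
                ctx.put("user", params.get("user"));
                ctx.put("password", params.get("password"));
                ctx.put("ticket", params.get("wfTicket"));
                ec.getSessionMap().put("workflowContext", ctx);
                return; // 4) proceed with the normal JSF lifecycle
            }

            // 2) No parameter: check whether an earlier request stored the bean.
            if (ec.getSessionMap().get("workflowContext") == null) {
                // 3) Unknown origin: short-circuit the lifecycle and redirect.
                try {
                    ec.redirect(ec.getRequestContextPath() + "/error.html");
                } catch (IOException e) {
                    throw new FacesException(e);
                }
                fc.responseComplete();
            }
        }
    }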

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target), it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to type the calc script name manually), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3)I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the Best Practice for doing this and the syntax to run MAXL commands in a Command Task in Informatica? I previously had Shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting codes IN the workflows/sessions to make it easier to review the code and follow the logic, rather than having to find the scripts and open each of them.
    Any assistance you have with the two products working together I would GREATLY appreciate it!
    Robert

    As I know, addUser(User user){ ... } is much more useful, for several reasons:
    1. It's object-oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.

  • What is the guideline and/or best practice for EMC setup on ASM?

    We are going to use EMC CX4-480 for ASM storage on RAC. What is the guideline and best practice for EMC setup on ASM?
    Thanks for the advice!

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed related problem.
    I'm using Web Services (WS) 1.0. I insert an account; then, in a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the account when creating the contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or Best Practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps. As in this is how you insert an account with a contact(s) and it updates the appropriate IDs so that it shows up properly on the CRMOD web pages.
    Based on the above, it looks like I need to, as the next step, take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    AccountInsert()
    AccountID = NewAcctRec      // capture the new account's ID
    ContactInsert(NewAcctRec)   // create the contact linked to that account
    ContactID = NewContRec      // capture the new contact's ID
    AccountUpdate(NewContRec)   // link the account back to the contact
    Thanks,

  • Purchasing Group control - PO and PR - Best Practices?

    Hi experts,
    My client needs to implement, in the same plant, purchasing groups for POs and indirect PRs.
    They currently have around 5-6 plants in Asia, and the design is to group the purchasing groups for each plant at the prefix level, for example A01 to AZZ for the plant in Thailand, B01 to BZZ for the plant in China.
    Thus we will have a situation where A01, A02, A03 are created for buyers and A04, A05 are created for departments to raise indirect PRs.
    But the requirement is that the buyers should not use the purchasing groups belonging to the departments (A04, A05) to create POs. Other than hard-coding A01, A02 and A03 into the role given to the buyers, what is the best-practice way to design purchasing groups in such situations?

    Hi Ravi,
    Thanks for your response.
    My client won't have the issue of buyers buying for other plants, as the buyers are local buyers situated at each plant. They don't have a centralized purchasing team that purchases for more than one plant.
    When you say purchasing groups are added to the role, I presume you mean that a buyer only has authorization for their own purchasing group? That means buyer A01 only has authorization for A01 and nothing else?
    When it comes to maintenance, won't this be tough? In addition, buyers will not be able to back each other up within the same plant when anyone goes on leave. Our current plan is to give A* to the buyers in the Thailand plant. But that also means I might have the problem of direct buyers accidentally using a department purchasing group to create a PO.

  • BI BPC server strategy and the best practice

    Hi  Everyone
    I would like to ask a couple of questions:
    1) Is there any white paper or documentation on the pros and cons of having BPC NW installed as an add-on to a BW system, where planning and reporting take place on the same BW system, versus BPC as a separate system used primarily for planning and consolidation only?
    2) Is there a best-practice document with performance considerations for BPC development from SAP?
    Any answers appreciated.
    Regards
    AK

    Hi AK,
    Both scenarios work well, but for the first scenario, having BPC on top of existing BW reporting, you need to pay special attention to sizing. As BPC requires additional capacity, you need to take care of it.
    Also, if you have the SEM component on your BW system, you need to check SAP Note 1326576 – SAP NW system with SAP ERP software components.
    And before you install BPC, it is recommended to run a quick test of the existing BW reporting process once you have upgraded to NW EhP1 (a prerequisite for BPC).
    regards,
    Sreedhar
