Regarding Data or Log Management

Hi,
Does SAP HANA support housekeeping jobs?
Suppose data and logs extracted from ERP or BW are stored in the HANA DB for years.
How can the old data or logs be cleaned from HANA after some period of time?
Does SAP HANA offer any way to do this, as a general practice?
Regards
Magalingam

Hi all,
Thanks for replying.
I understand there are active and passive data stores to prioritize data access.
As a best practice, how do we archive in HANA, and how do we handle obsolete data?
Can you please elaborate a bit more on this or refer me to documents?
I could not find anything about archiving on the HANA portal other than the technical operations guide.
Regards
magalingam
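
For simple time-based cleanup of extracted data, plain SQL deletion is one option in HANA alongside archiving; a minimal sketch (the schema, table, column, and two-year retention below are hypothetical illustrations, not from SAP documentation):

-- Delete rows older than the retention period (HANA SQL; names are hypothetical)
DELETE FROM "SAPSR3"."ZSALES_HISTORY"
WHERE "CREATED_ON" < ADD_DAYS(CURRENT_DATE, -730);

-- Reclaim log segments that are no longer needed (HANA-specific command)
ALTER SYSTEM RECLAIM LOG;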

Similar Messages

  • How to view data in log view - process chain?

    Hi friends,
    I simultaneously loaded data into an ODS and an InfoCube, using a process chain to load the data. But after activation I couldn't view the data in the log. I then found that the data had not been loaded into the ODS, so I activated it again and re-ran the process chain activation. Even after that I could not view data in the log view, and when I click on the load data process (ODS), no data is found. Please help out.
    With regards,
    Appu

    Hi,
    Go to the manage screen for that ODS. See the relevant request and find how many records were transferred and how many were added.
    If zero records were added, then check for any rules.
    Arun
    Assign points if useful

  • Event ID 2115 A Bind Data Source in Management Group

    Hi All,
    I have some issues with Microsoft System Center Operations Manager 2012 R2. The issue is the following:
    A Bind Data Source in Management Group XXXXXXX has posted items to the workflow, but has not received a response in 2100 seconds. This indicates a performance or functional problem with the workflow.
     Workflow Id : Microsoft.SystemCenter.CollectPerformanceData
     Instance    : XX.XXX.XXX
     Instance Id : {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
    I've restarted all SQL services on the Ops SQL server (separate from the Ops console).
    We're also seeing the same warning in the Ops console event log for Workflow Id : Microsoft.SystemCenter.CollectAlerts and Workflow Id : Microsoft.SystemCenter.CollectDiscoveryData.
    Can you help me with these issues?
    Kind Regards
    Niels Mennen

    Please check the following:
    1) Whether the Operations Manager or Operations Manager Data Warehouse databases are out of space or offline.
    2) Whether your SDK action account is correct, especially that the password is correct.
    Event ID 2115 messages can also indicate a performance problem if the Operations Manager and Data Warehouse databases are not properly configured. Performance problems on the database servers can lead to Event ID 2115 messages. Some possible causes include the following (a quick space check is sketched after the list):
    • The SQL log or TempDB database is too small or out of space.
    • The network link from the Operations Manager and Data Warehouse databases to the management server is bandwidth-constrained or high-latency. In this scenario we recommend that the management server be on the same LAN as the Operations Manager and Data Warehouse server.
    • The data disk hosting the database, logs, or TempDB used by the Operations Manager and Data Warehouse databases is slow or experiencing a functional problem. In this scenario we recommend RAID 10, and we also recommend enabling battery-backed write cache on the array controller.
    • The Operations Manager or Data Warehouse server does not have sufficient memory or CPU resources.
    • The SQL Server instance hosting the Operations Manager database or Data Warehouse is offline.
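    For the space-related causes, a minimal T-SQL sketch can help rule space out (OperationsManager is the default database name and may differ in your management group):

    -- Log file size and percentage used for every database on the instance
    DBCC SQLPERF(LOGSPACE);

    -- Free space per data file in the Operations Manager database
    USE OperationsManager;
    SELECT name,
           size / 128 AS size_mb,
           size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
    FROM sys.database_files;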
    Roger

  • Log manager throughput considerations - No more than 3840K in-flight

    Hi Experts,
    As per the documentation about the Log Manager:
    Log manager throughput considerations
    Outstanding log writes: 32-bit = limit of 8, 64-bit = limit of 32
    No more than 3840K "in-flight"
    Individual write size varies
    Up to 60KB in size
    On 64-bit we can have 32 outstanding log writes, and one log write is at most 60 KB, so the maximum in-flight size per database should be 32 * 60 KB = 1920 KB. But the documentation says 3840 KB ("No more than 3840K in-flight"), double 1920 KB.
    This seems contradictory to me.
    Manish

    Thanks for the help.
    http://blogs.msdn.com/b/sqlcat/archive/2013/09/10/diagnosing-transaction-log-performance-issues-and-limits-of-the-log-manager.aspx
    Limits of the Log Manager
    Within the SQL Server engine there are a couple of limits on the amount of I/O that can be "in-flight" at any given time, "in-flight" meaning log data for which the Log Manager has issued a write and not yet received an acknowledgement that the write has completed. Once these limits are reached, the Log Manager will wait for outstanding I/Os to be acknowledged before issuing any more I/O to the log. These are hard limits and cannot be adjusted by a DBA. The limits imposed by the Log Manager are based on conscious design decisions founded in providing a balance between data integrity and performance.
    There are two specific limits, both of which are per database:
    1. Amount of "outstanding log I/O": SQL Server 2008 has a limit of 3840K at any given time.
    2. Amount of outstanding I/Os: SQL Server 2005 SP1 or later (including SQL Server 2008), 64-bit, has a limit of 32 outstanding I/Os.
    So the byte cap (3840K) and the I/O-count cap (32) are two independent per-database ceilings, and whichever is reached first throttles the log writer; the 3840K figure is not derived from 32 * 60 KB. A quick way to check whether a workload is hitting these limits is sketched below.
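    A minimal T-SQL sketch using a standard DMV to watch log-write waits (values are cumulative since instance start):

    -- Sessions waiting on transaction log flushes accumulate WRITELOG waits
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'WRITELOG';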
    Regards
    Manish

  • "Failed to retrieve data from OVM Manager"

    I'm trying to set up a Cloud Control 12c private cloud environment. My machines in the OVM pool have local disks, so there is a repository per physical machine, plus one shared repository on NFS (OVM lists all those repositories in its UI).
    When I try to navigate to the VM Manager -> Administration -> Storage Repository page, this is the error dialog I get:
    "Failed to retrieve data from OVM Manager. Please check log file for details"
    I've tried performing VM Manager -> Synchronize, but that didn't help.
    Note that I can go to other pages, like Administration -> Network, without any problem.
    Which log file should I look into? I tried looking under /u01/app/oracle/Middleware/oms/sysman/log but found nothing useful.

    Hi Diptesh,
    What is your Crystal Reports version? CR XI or higher?
    And does your filter object consist of fields containing apostrophes?
    If so, this is a known issue; try installing the latest service packs or fix packs to see if that resolves it.
    Regards,
    Vinay

  • Firewall Log Management Software

    Can anyone recommend any firewall log management software that is proven?

    Adam,
    I suggest you try ManageEngine Firewall Analyzer.
    The product supports almost all the leading vendors in the industry. Its features are segregated into three categories:
    1. Traffic
    2. Security
    3. Management
    1. Traffic Statistics:
       These give you the complete bandwidth information transacted throughout the network, with multiple drill-down analyses such as source, destination, protocol, hits, bytes sent, bytes received, etc. You can even do capacity planning and forecasting with the product.
    2. Security Statistics:
       Security statistics (reports) display all malicious events in your network. They help you understand the various threats and attacks on the company, from outside to inside and vice versa.
    3. Management Statistics:
       These help you perform audit and security configuration analysis, including change management and compliance reports. They point out the loopholes in the network and assist you in fixing them.
    Why Firewall Analyzer?
    *Support for Firewall and security devices from multiple vendors
    *Real-time bandwidth monitoring
    *Employee internet usage with URL monitoring
    *Real-time alerting
    *Firewall Change Management reports
    *Security Audit & Configuration Analysis reports
    *Diagnose live connections
    *Capability to view traffic trends and usage patterns (Capacity Planning)
    *Powerful search for forensic and security analysis
    *Multi-level drill down into top hosts, protocols, web sites and more
    *Network security reports
    *Firewall compliance reports
    *Flexible and secured log data archiving
    *Rebranding, User based views and dashboard for MSSP Support
    and more
    http://www.manageengine.com/products/firewall/features.html
    I recommend evaluating the fully functional 30-day evaluation copy to check whether it helps you achieve your use case.
    Regards,
    Vignesh.K
    Firewall Analyzer

  • Few questions regarding Training and event management

    Dear All,
    Can you please help me with the following queries regarding Training and Event Management:
    1. How to freeze a completed business event so that no changes to its record (like deleting/updating attendee details or event details) are possible?
    2. How to get feedback from attendees/faculty in a predefined format with a rating (1-4 scale)?
    3. How to maintain department- and category-wise training man-hours or man-days?
    Any help will be appreciated.
    Regards,
    Toa

    Hi Toa,
    1. Run report RHHISTO0 via SA38. Once the business event is flagged as "historical", no further changes can be made.
    3. SM30 - T77S0 - activate the following switches:
    SEMIN AINST
    SEMIN APART
    Then, when events are followed up, this training data is recorded in infotype 2002 for the employee. You can report on it via PT90 (for department-based selection use "Further selections").
    Regards,
    Dilek

  • F110 Status - Printing data and log have been deleted

    Hi All,
    We have an issue with the F110 transaction. In the status, the message 'Printing data and log have been deleted' is displayed. As I am new to Finance, I could not work out what that message means. Can you please explain what could be the reason for it?
    Thank you,

    Hi gurus,
    I have a similar issue. I created a check run yesterday and went through all the steps:
    - Parameters have been entered
    - Payment proposal created
    - Payment run has been carried out
    - Posting orders: 77 generated, 77 completed
    - Printing data and log have been deleted
    I cannot remember what I clicked on that caused this.
    Can someone please help and tell me how I can either re-run this check run (I mean the physical checks), or delete this run and re-run it from scratch?
    Best Regards,
    Yassmen

  • How to defrag after Shrinking the data and log every night

    OK, my shop has a shrink job that runs every night. It shrinks the data and log files with DBCC SHRINKFILE (filename, 1). We use the simple recovery model.
    Having read that SHRINK is horrible for the data file because of fragmentation, I'm looking to stop the daily job. But this job has been running every day for years, right after the ETL. We have a big nightly ETL process that (1) truncates several hundred SQL tables and then does BULK INSERTs from the AS400, and (2) for really large tables brings in just the last X days from the AS400 and inserts those records into the SQL table. We have very few indexes, but we do have some on the really large tables. FYI - the SQL Servers are at the clients' sites.
    What kind of fragmentation has the SHRINK caused on the data file? How would it affect the ETL and the retrieval of data during the day? How can the fragmentation be fixed?

    Hello,
    Databases suffer from fragmentation on indexes, as explained here, but they also suffer from physical fragmentation at the storage level.
    When the shrink process shrinks a data file and recovers space from the disk, and the data file then needs to grow again as part of your ETL process, the new portion of the data file will rarely be contiguous with the rest of the data file on disk. If this shrink happens many times, the data file may end up spread among many tracks on the disk. This increases the disk requests required to write data to a fragmented data file and to create new data files. In general, fragmentation means that what should be a simple I/O request has to be broken into many disk requests, making disk activity and performance less predictable and disk queues larger.
    My suggestion is to stop the SQL Server services during a maintenance window, defragment the disk using the Windows Defrag tool or a third-party tool, start the SQL Server services again, and then defragment the indexes in your databases, as sketched below.
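    To measure and fix the index fragmentation, a minimal T-SQL sketch (the 30% threshold and the table name dbo.SomeLargeTable are illustrative assumptions):

    -- List heavily fragmented indexes in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30;

    -- Rebuild all indexes on one of the large tables identified above
    ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;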
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Data for travel management?

    I'm currently looking for the travel management data (Financial Management & Controlling -> Travel Management) on the DS server. I have been looking for it for a few weeks and I still can't find it. Does anyone have any idea where the data is stored?

    Hi guys,
    I am accessing the data from the PCL1 cluster too, but would like to do so within a method, and that does not permit using the 'RP-IMP-CL-TE' macro or the IMPORT statement in its present format. In fact, even the internal tables need to be declared in the OO fashion, without header lines.
    That being said, I tried doing the import in the following fashion:
    IMPORT gte_version TO ote_version
             statu    TO lt_statu
             beleg   TO lt_beleg
             exbel   TO lt_exbel
             abzug  TO lt_abzug
             ziel      TO lt_ziel
             zweck  TO lt_zweck
             konti    TO lt_konti
             vsch    TO lt_vsch
             kmver  TO lt_kmver
             paufa  TO  lt_paufa
             uebpa TO  lt_uebpa
             beler   TO lt_beler
             vpfps  TO lt_vpfps
             vpfpa  TO lt_vpfpa
             rot       TO lt_rot
             ruw     TO lt_ruw
             aend   TO lt_aend
             kostr   TO lt_kostr
             kostz  TO lt_kostz
             kostb  TO lt_kostb
             kostk  TO lt_kostk
             v0split TO lt_v0split
             editor  TO lt_editor
             user    TO lt_user
    FROM   DATABASE pcl1(te) ID
    gs_key ACCEPTING PADDING ACCEPTING TRUNCATION.
    where gte_version / ote_version are work areas and the remaining variables, STATU through USER, are internal tables without header lines. Although this design passes the syntax check, it raises an exception during program execution (an exception of type CX_SY_IMPORT_MISMATCH_ERROR occurred, but was neither handled locally nor declared in a RAISING clause).
    Can you shed some light on what the problem could be here?
    Thanks and regards,
    Srikanth

  • Log management - Oracle PIM

    In the development and production environments, log management has been enabled for many functionalities and for most users. As a result, a lot of log files have been created (and are still being created), ending up occupying a huge amount of space. I could not find any relevant information in the PIM user or implementation guides, or in any knowledge base articles on the Oracle tech support side. Hence I need your input on the following:
    • What is the standard log management practice in Oracle PIM?
    • What is the standard process and procedure for archiving and deleting log files?
    • How often are log files archived?
    Regards,
    Ram
    +358 451172788

    Please see the following MOS Docs.
    R12 Product Information Management (PIM) Training [Video] (Doc ID 1498058.1)
    Information Center - Oracle Fusion Product Information Management ( PIM ) (Doc ID 1353460.2)
    Information Center - Troubleshooting Fusion Product Information Management (PIM) Applications. (Doc ID 1380507.2)
    Information Center: Product Information Management (PIM) (Doc ID 1310505.2)
    Guidelines and Product Definition Methodology for Oracle MDM Product Hub Integration (Doc ID 1086492.1)
    Oracle Product Hub for Communications Readme Document, Release 12.1.1 (Doc ID 885359.1)
    Thanks,
    Hussein

  • Security Audit Log SM19 and Log Management external tool

    Hi all,
    We are connecting an SAP ECC system to a third-party product for log management.
    Our SAP system is composed of many application servers.
    We have connected the external tool to the SAP central system.
    The external product gathers data from the SAP Security Audit Log (SM19/SM20).
    The problem is that in the external tool we only see the data available on the central system.
    The mandatory parameters have been activated and the system has been restarted.
    The SAP Security Audit Log creates a separate audit log file for each application server. Probably, only when SM20 is started are all audit files from all application servers read and collected.
    In our scenario we do not use SM20, since we want to read the collected data in the external tool.
    Is there a job to be scheduled (or something else) so that all Security Audit Log data (from all application servers) is available on the central instance?
    Thanks in advance.
    Andrea Cavalleri

    I am always amazed by these questions...
    For one, SAP provides an example report (RSAU_READ_AUDITLOG_EXTERNAL) that uses BAPIs for alerts from the audit log, yet third-party solutions seem to be allergic to using APIs for some reason.
    Mainly, however, I do not understand why people don't use the CCMS (transaction RZ20) security templates and monitor the log centrally from SolMan. You can do a million cool things in SolMan... but no...
    Cheers,
    Julius

  • Change the Data and Log file locations in livecache

    Hi
    We have installed liveCache on Unix systems under the /sapdb mount directory, where the installer created the sapdata and sapdblog directories. But the Unix team has already created two mount points:
    /sapdb/LC1/lvcdata and /sapdb/LC1/lvclog.
    While installing liveCache we had selected these locations for creating the DATA and LOG volumes. Now they are asking us to move the DATA and LOG volumes created in the sapdata and saplog directories to these mount points. How do we move the data and log files and keep the database consistent? Is there a procedure to move the files to the mount-point directories and change liveCache's pointers to these locations?
    regards
    bala

    Hi Lars,
    Thanks for the link. I will try it and let you know.
    But this is a liveCache database (even though it uses MaxDB) which was created by sapinst; moreover, is there anything to be adjusted in SCM, or any modification to be done at the DB level?
    regards
    bala

  • Need to remove unwanted data in log file

    Hi experts,
    How do I identify and remove unwanted data from the log files? Please help me.
    regards,
    pugazh.

    brarchive takes a backup of all offline redo logs and then deletes them. This is done at OS level.
    If you do not need them and you do not need archiving, a permanent solution is to disable archiving:
    Stop SAP first, then:
    sqlplus / as sysdba
    shutdown immediate;
    startup mount
    alter database noarchivelog;
    alter database open;
    exit
    After this, you will never be able to do a point-in-time recovery and won't be able to take online backups.
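    To confirm the change took effect, a quick check while still connected in SQL*Plus as sysdba:

    -- Should return NOARCHIVELOG after the steps above
    select log_mode from v$database;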

  • Warning    Log Management

    Warning Log Management
    The date field is probably invalid on /usr/web/serveurs_web/bea_prod/wlserver6.1/config/workflow/logs/weblogic.log line 14. Message ignored during search.

    Hi Abhishek,
    As I mentioned earlier, the alert resolution makes the same points.
    Can you give details on the below?
    Is there really a log named "Dhcpadminevents" in the MS's Event Viewer?
    Did you recently configure any new alert where you mentioned "Dhcpadminevents" as an event log location?
    If yes, what is the target you selected for the rule/monitor there?
    Can you post the results for analysis?
    Gautam.75801
