Need snapshot names of clustered Data ONTAP volumes in WFA 3.1 for use in user input fields

Hi, I want the operators of workflows to be able to choose from a list of snapshots of volumes (e.g. to use one as the base for a cloning operation). For this I need the snapshots of cDOT volumes in a table (e.g. cm_storage.snapshot) so that I can use SQL statements to select them. I am aware of the discussions in thread1 and thread2, but the solution in thread2 cannot be imported into WFA 3.1 (see my comments in both threads). Meanwhile the creator of the solution in thread2 has left NetApp and may not update it. NetApp itself seems unable or unwilling to solve this issue (either by filling this table in WFA by default or by porting the solution of thread2 to WFA 3.1). I would create this table, the data source and the OCUM/cluster connection myself if someone can tell me the steps to do this. Is there somebody who can describe/share what to do to get such a table filled with snapshot names from clustered Data ONTAP volumes? Thanks in advance.
Walter

Hello, I'd like to understand your requirement first. I can see there is a table named 'Snapshot' under cm_storage_smsv from which you can pick the snapshots, so I think the query below will suffice:

SELECT
    Snapshot.name AS 'name',
    Snapshot.volume AS 'volume',
    Snapshot.vserver AS 'vserver',
    Snapshot.cluster AS 'cluster',
    Snapshot.snapmirror_label AS 'snapmirror_label',
    Snapshot.timestamp AS 'timestamp'
FROM
    cm_storage_smsv.Snapshot Snapshot
WHERE
    Snapshot.volume = '${VolumeName}'
    AND Snapshot.cluster = '${ClusterIP}'
    AND Snapshot.vserver = '${VServerName}'

Let me know if I misunderstood something.
Thanks
--Gaurab
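For completeness: if you still want a dedicated snapshot table that a custom data source fills (as the question asks), the backing schema in WFA's MySQL database could look roughly like the sketch below. WFA normally generates such cache tables itself from a dictionary entry you define in a scheme, so the CREATE TABLE is only to illustrate the shape; the scheme name custom_storage and the column set (mirroring the columns in the query above) are assumptions, not an official WFA schema.

CREATE TABLE custom_storage.snapshot (
    id        INT NOT NULL AUTO_INCREMENT,  -- surrogate key
    name      VARCHAR(255) NOT NULL,        -- snapshot name
    volume    VARCHAR(255) NOT NULL,        -- owning volume
    vserver   VARCHAR(255) NOT NULL,        -- owning SVM
    cluster   VARCHAR(255) NOT NULL,        -- owning cluster
    timestamp DATETIME NULL,                -- snapshot creation time
    PRIMARY KEY (id)
);

-- A user-input query against it would then mirror the one above:
SELECT s.name
FROM custom_storage.snapshot s
WHERE s.volume  = '${VolumeName}'
  AND s.vserver = '${VServerName}'
  AND s.cluster = '${ClusterIP}';

The data source script that walks the clusters (or OCUM) and inserts one row per snapshot still has to be written; the sketch only shows the table shape the user-input SQL would select from.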

Similar Messages

  • UNIFIED MANAGER ALERT: on EXPIRING SSL certificates in clustered Data ONTAP systems

    The default ssl certificates on clustered Data ONTAP systems are valid for 1 year.
    Since we have cDOT clusters monitored via OnCommand Unified Manager 6.2, we would like Unified Manager to alert on expiring certificates.
    Is this possible in OCUM 6.2?
    Thanks

    Thanks Saravanan. Initially I had it on RHEL 6.6, and some of the existing packages were older versions and caused issues during the rrdtool and SQL installation, but I managed to complete the installation and still faced the same issue. I didn't know that this was a user account issue rather than a package dependency issue; that's why I had my server upgraded to RHEL 7.1, where the installation went fine but the issue stayed the same. It's working for now, thanks again :-)

  • PDF Preview: Making the Transition To Clustered Data ONTAP

    NetApp has refined its tools and processes for a smooth transition to clustered Data ONTAP. The latest software release removes the remaining barriers to entry, so if you’ve been holding back it’s time to make the move from 7-Mode to take full advantage of nondisruptive operations, scale-out, and more. This article explains the transition framework and provides links to the latest resources and tools.

    Please, can someone help with an NSO-157 exam study guide?

  • How to extract data in order to create a file for use with program RFBIBL00?

    Hi all!
    In order to correct data in mass, I need to select the corresponding data (3000 FI documents) and extract it into a file so I can correct a field. The extract must have the segments 1BKPF, 2BSEG... the same file structure as in SXDA_TOOLS for FI documents.
    Which program must I use to get the correct extract for the standard program RFBIBL00 to create FI documents?
    Must I create a specific program to create this file, or can only an administrator do this?
    Thanks for your help!
    David

    Hi
    RFBIBL00 is built on the main structures BGR00 (for batch input session data), BBKPF (for header data) and BBSEG (for item data).
    Both structures BBKPF and BBSEG have to be filled, but only for the fields to be used or changed; the rest of the fields must contain the NO DATA symbol, i.e. '/'.
    So there is no standard program that can do this automatically; you need to create an ad hoc program, or you can use the Legacy System Migration Workbench (transaction LSMW) to do it.
    Your real problem may therefore be how to extract the data to be processed by LSMW, but you can do a simple query with SE16 and download the result to an Excel file.
    Max
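    For the extraction step itself, the selection that SE16 effectively runs against the FI document header table BKPF looks like the sketch below. This is only an illustration; the WHERE values are hypothetical placeholders, not taken from the thread:

    SELECT belnr,         -- document number
           bukrs,         -- company code
           gjahr          -- fiscal year
    FROM bkpf
    WHERE bukrs = '1000'  -- hypothetical company code
      AND gjahr = '2015'; -- hypothetical fiscal year

    The result can then be downloaded to a file and fed into the LSMW/RFBIBL00 processing described above.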

  • How do you save the dynamic data type from the DAQ Assistant for use in Excel or MATLAB?

    Currently, I have the following basic VI set up to save data from my PCI-6221 data acquisition card. The problem I'm having is that I keep getting only the last iteration of the while loop in the measurement file, and that's pretty much it. When I try to index the data leaving the loop, it gives me a 2D array of data which cannot be input into the "Write to Measurement File" VI. How would I save this in a useful data/time-step format? Is there a way to continuously collect the data and then save it in one large measurement file that I could manipulate in MATLAB/Excel? Am I using the wrong type of loop for this application? I also noticed my dynamic data array consists of data, time, time step and then a vector of the data taken. Is it possible to just get a vector of the time change per sample alongside the data? Sorry for the barrage of questions, but any help would be greatly appreciated, and thanks in advance!
    -Bryan
    Attachments:
    basic DAQ.vi 120 KB

    There is a VI in the Express > Signal Manipulation palette called "From DDT" that lets you convert the dynamic data type to other data types that are more compatible with operations like file I/O. For instance, you could convert your DDT into a 2D array and use Write To Spreadsheet File.vi. Just a thought...
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • Need help in logging JTDS data packets

    Hi All,
    I have a web application which uses a SQL Server database.
    I have to troubleshoot some database connection problems, and for that I need to log the jTDS data packets.
    I have tried to use the class net.sourceforge.jtds.jdbc.TdsCore, but the constructor of TdsCore needs two parameters: one is a ConnectionJDBC2 and the other is a SQLDiagnostic.
    I have tried a lot, but it did not allow me to import the SQLDiagnostic class.
    I need help logging jTDS data packets. Are there any other ways, or does anybody have any idea about logging jTDS data packets/SQLDiagnostic?
    Please reply, it is urgent!
    Thanks in advance!

    If you want to use log4j, then create a file called log4j.properties in your project and add this:
    # Set root logger level to INFO and its only appender to ConsoleOut.
    log4j.rootLogger=INFO,ConsoleOut
    # ConsoleOut is set to be a ConsoleAppender.
    log4j.appender.ConsoleOut=org.apache.log4j.ConsoleAppender
    # ConsoleOut uses PatternLayout.
    log4j.appender.ConsoleOut.layout=org.apache.log4j.PatternLayout
    log4j.appender.ConsoleOut.layout.ConversionPattern=%-5p: [%d] %c{1} - %m%n
    log4j.logger.org.apache.jsp=DEBUG
    #Addon for
    com.sun.faces.level=FINE
    Then go to your class, import org.apache.log4j.Logger, and add this line:
    private static final Logger logger = Logger.getLogger("classname");
    and then you can use the
    logger.info();
    logger.error();
    methods.

  • User input in date format in data form is not identified in BR

    Hi all,
    In a data form, users provide input for the start month and end month against the account members "Transfers Start_Month" and "Transfers End_Month".
    E.g.:
    Transfers Start_Month = Jul
    Transfers End_Month = Nov
    When I tried to use the above two members in a BR as below,
    @MEMBER(@NAME("Transfers End_Month"->"BegBalance"->"SY_Forecast"->"MF04"))
    so that it should return the user input month (e.g. "Nov"), the method did not work; it returned no month.
    I tried using a CDF function as below, which also didn't return the user input month:
    @MEMBER(@NAME(@HspDateToString("Transfers End_Month"->"BegBalance"->"SY_Forecast"->"MF04")))
    Kindly enlighten me on how to handle this scenario. I need the user input month to be returned in the BR.
    Thanks!

    Hi,
    Please find below the IF condition where we check whether the period dimension member is between "Jan" and the value entered by the user in the data form (e.g. "Nov"); if so, we assign 200 to headcount.
    IF(@ISMBR("Jan":@MEMBER(@NAME("Transfers End_Month"->BegBalance->SY_Forecast->MF04))))
    headcount->SY_Forecast->MF04=200;
    ENDIF;
    In the above script, @MEMBER(@NAME("Transfers End_Month"->BegBalance->SY_Forecast->MF04)) should return the value entered by the user in the data form (e.g. "Nov").
    Note: Jan to Dec are level 0 members of the period dimension.
    Kindly advise if there are any additional functions/conversions from date to string that I'm missing.
    Yes, I'm using Calc Manager to write the rule.
    Thanks!

  • SmartSync Application: Passing Input Field Parameters & Fetching Data

    Hello SDNers
    I have created a sample Smart Sync application from a custom SyncBO (search functionality).
    I also created a custom JSP with one button for "GetList", three input fields, namely Name (mandatory), Email and Address, and one table.
    The overall functionality (also of the SyncBO) is such that after filling the fields, whenever the person presses GetList, these three parameters should get passed to the backend, which then fetches the data to fill the table.
    I don't have any clue how to pass the parameters and fetch the details.
    Can anyone give me code for this, or tell me how to access the DB parameters mentioned in the BAPI wrapper/SyncBO?
    Kindly help.
    Thanks and regards

    Hi Chetan,
    First, I have some questions about your post.
    When you say Smart Sync application, you created it in NWDS using the MDK plugin, right?
    When you say access tables etc., you mean tables related to SyncBOs in the mobile device database (DB2e), right?
    When you say that on click of the GetList button it should pass parameters to the backend, you mean the mobile device backend, i.e. DB2e, not MI or R/3, right?
    In any case, when you create a Smart Sync application in NWDS using the MDK plugin, the plugin will generate from meRepMeta.xml all the APIs interfacing to your data on the device.
    Using these APIs you can set and get data in the local database, but it all depends on the properties of the fields in the SyncBO.
    Regards,
    Sai.

  • Need BAPI name to change the PO data in SRM.

    Hi all,
    I need a BAPI name to change PO data in SRM.
    BAPI_PO_CHANGE is not available in SRM.
    If there is no BAPI to change a PO in SRM, what is the alternative solution?
    Regards,
    A.I.Rajesh.

    Hello,
    try using FM BBP_PD_PO_UPDATE.
    Regards.

  • Need user name in header data when posting F-65

    Hi all,
    I have a problem when posting an accounting document through workflow. Whenever I post the document manually, my user name is entered in the header data of the posted document. But whenever I post it through workflow, the entered user name is WF-BATCH. I want the user name of the user who executed the work item.
    Please let me know ASAP.
    Thanks
    chandu

    This is a very common issue. The user name is the name of the user who actually put the record in the database which is WF-BATCH in case of workflow. You can then find out the time the record was created and look for the actual user in the workflow log (transaction SWI1). If you still want the actual user, I'm afraid that you'll need to create a custom field and enhance your workflow.

  • What is this: "Time Machine couldn't complete the backup to '***Network Name***'. The backup disk image '/Volumes/Data/MacBook Pro.sparsebundle' is already in use"?

    "Time Machine couldn't complete the backup to '***Network Name***'. The backup disk image '/Volumes/Data/MacBook Pro.sparsebundle' is already in use"
    What does this mean and how do you fix it?

    I get that the file is in use; what is the trick to release the file? This has something to do with OS X 10.8; I never had this problem before I updated to 10.8.

  • HT3275 I need help. I keep getting the error message: the backup disk image "/Volumes/Data/MacBook Pro.sparsebundle" is already in use.

    I keep receiving the following error for my Time Machine backup: the backup disk image "/Volumes/Data/MacBook Pro.sparsebundle" is already in use. Any suggestions?

    This is standard for Lion and Mountain Lion.
    Reboot the Time Capsule to fix it.
    If you actually run Lion, you can install the 5.6 utility, go to the disk page and click "Disconnect all users"; this is usually enough to fix it. On the previous backup, Time Machine didn't dismount the sparsebundle correctly and so cannot mount it again.
    See C12 at http://pondini.org/TM/Troubleshooting.html

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MAXL could generate multiple error files during parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this: rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    In fact, if I hard-code the error file name as above, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt" Essbase actually produces multiple error files and appends a number?
    Tim. 
    Yes, it's appending a number the way I said.
    Out of interest, though - why do you want to do this?  The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have 6-7 rule files with which the data is pulled from SQL and loaded into Essbase. I'm not saying it's impossible to track which rule an error record came from.
    Regardless, the only way I can think of to have total control of the error file name is to use the 'manual' parallel load approach.  Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer.  Then commit them all together.  This gives you most of the parallel load benefit, albeit with more complex scripting.
    I had the same thought of calling multiple instances of MaxL using a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS

  • Need a Query that Returns both Column Name with Column Data

    Hi,
    I hope someone can assist quickly. I'm after a query that returns both the column name and the column data, i.e.
    Table: APP_INFO
    COL1 - currently has the value of 10
    COL2 - currently has the value of 'HELLO'
    COL3 - currently has the value of 'QWERTY'
    COL4 - currently has the value of 2000
    The query I'm after should return the following result set: [actual column name, actual column data]
    COL1,10
    COL2,'HELLO'
    COL3,'QWERTY'
    COL4,2000
    Any help would be much appreciated.
    Thanks.
    Tony.

    Like this ?
    SQL> select empno, ename, deptno from emp where deptno = 10;
         EMPNO ENAME          DEPTNO
          7782 CLARK              10
          7839 KING               10
          7934 MILLER             10
    SQL> select decode(t.id,1,'EMPNO',2,'ENAME',3,'DEPTNO') COLNAME,
      2  decode(t.id,1,to_char(empno),2,ename,3,deptno)
      3  from (select emp.*, rownum rn from emp
      4  where deptno = 10) emp, (select rownum id from dict where rownum <=3) t
      5  order by emp.rn, t.id
      6  /
    COLNAM DECODE(T.ID,1,TO_CHAR(EMPNO),2,ENAME,3,D
    EMPNO  7782
    ENAME  CLARK
    DEPTNO 10
    EMPNO  7839
    ENAME  KING
    DEPTNO 10
    EMPNO  7934
    ENAME  MILLER
    DEPTNO 10
    9 rows selected.
    Rgds.
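    For what it's worth, on Oracle 11g and later the same result can be produced without the decode/rownum trick by using UNPIVOT. A minimal sketch, assuming the APP_INFO table from the question; the numeric columns are converted with TO_CHAR first because UNPIVOT needs all value columns to share one type:

    SELECT colname, colvalue
    FROM (SELECT TO_CHAR(col1) AS col1,  -- unify types before unpivoting
                 col2,
                 col3,
                 TO_CHAR(col4) AS col4
          FROM app_info)
    UNPIVOT (colvalue FOR colname IN (col1 AS 'COL1',
                                      col2 AS 'COL2',
                                      col3 AS 'COL3',
                                      col4 AS 'COL4'));

    This returns one row per column (COL1,10 / COL2,HELLO / ...), which is exactly the requested [column name, column data] shape.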

  • Do we need Snapshots on our aggregates?

    Dear mates, our question is very simple. We have two storage processors on a FAS2240-4 chassis that are replicating volumes (through SnapMirror technology) to another FAS2240 chassis in a remote/different datacenter. We do NOT replicate the whole aggregates that contain these volumes, and we will NOT need to revert an entire aggregate to a previous state. In this scenario, is it needed/advisable to have:
    - The aggregate snapshot scheduling active?
      > snap sched -A
      Aggregate aggr0: 0 1 4@9,14,19
      Aggregate aggr1: 0 1 4@9,14,19
    - The 5% aggregate snapshot reserve?
      netapp2-spa> aggr options aggr0
      ...... percent_snapshot_space=5%,
    May we disable scheduling and set percent_snapshot_space=0%? Thank you so much in advance

    That's the default out-of-the-box configuration. In my case, I do snap sched -A aggr0 0 immediately and remove any snaps that might be there. I don't see a reason why you need aggregate snapshots in your case. Also, another way to check your aggregate reserve is snap reserve -A
