Generating category tree failed! on SecMon reporting

This morning I wanted to create a new report in the Security Monitor of our VMS 2.2 server.
I selected Reports, then Definitions > Create > IDS Alarms Report, and then tried to select some specific signatures from the IDS Signature drop-down menu below. In the part where the signature categories are meant to appear, there is an error message:
“Generating category tree failed!”
If I hover over the error with the mouse, it shows:
“Error while building the SETI category tree”
I have attached a JPEG file that shows the error message.
Is this a bug and, if so, how can we fix it?

Once again, I guess I should search the bug database before opening new conversations. Many apologies for that…
I found the bug in the bug database at Cisco. The bug is entitled
“SecMon: cannot create report based on signature ID” (CSCsc31745)
According to the bug:
The report section tries to grab the latest signatures to include in the "IDS Signature" section; it does this based on the latest signature level and then the latest service pack number. If, say, both 4.1(5)S197 and 5.0(4)S197 are installed on the VMS server, the report will try to use 4.1(5)S197 because its SP number is later. This fails and the "Generating Category Tree Failed" error is displayed. (Automatic signature download is responsible for that!)
The workaround is to replace a Java file that resolves this issue; the replacement file can be obtained by opening a TAC case and referencing this bug ID.
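To make the selection logic concrete, here is a purely illustrative toy sketch in C# of the ordering the bug report describes; the version strings and the parsing are invented for the example and this is not Cisco's code. Ranking by signature level first and service-pack number second makes the older 4.1 package win:

using System;
using System.Linq;
using System.Text.RegularExpressions;

class SignatureVersionPick
{
    // Illustrative only: orders installed signature packages by signature level (the S-number),
    // then by service-pack number, as CSCsc31745 describes. With 4.1(5)S197 and 5.0(4)S197
    // both installed, the levels tie at 197, so the higher SP number (5) wins and the older
    // 4.1 package is chosen -- which is what makes the category tree build fail.
    static void Main()
    {
        string[] installed = { "4.1(5)S197", "5.0(4)S197" };

        var chosen = installed
            .Select(v => new
            {
                Version = v,
                Level = int.Parse(Regex.Match(v, @"S(\d+)$").Groups[1].Value),
                ServicePack = int.Parse(Regex.Match(v, @"\((\d+)\)").Groups[1].Value)
            })
            .OrderByDescending(x => x.Level)
            .ThenByDescending(x => x.ServicePack)
            .First();

        Console.WriteLine(chosen.Version);   // prints 4.1(5)S197 -- the wrong (older) package
    }
}

Sorting on the full product version first would pick 5.0(4)S197 instead, which is the package the report needs.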

Similar Messages

  • Reports successfully execute but generate a login failed for user 'sa' err

    I am running Crystal Reports Server XI R2. Classic ASP is used to generate embedded reports within our application. The Crystal reports use ODBC to connect to a SQL Server 2005 database. The reports generate successfully in our Classic ASP application. However, in SQL Server 2005 the following error messages are generated each time a Crystal report is run:
    - Login failed for user 'sa'.
    - Error: 18456, Severity: 14, State: 8.
    We know that we are passing the correct username/password to the Crystal reports, because they execute successfully.
    It appears that when the report is called, Crystal Reports first connects to SQL Server using a username/password that we didn't provide at execution time; this fails and the SQL Server 'Login failed for user' error is generated. It then runs the report using the username/password we provide and successfully generates the report.
    I have run Profiler against the SQL Server database, and the 'Login failed for user 'sa'' errors have an ApplicationName of either 'Seagate Crystal Reports' or 'Crystal Reports'. Therefore I know it is Crystal Reports generating these errors in SQL Server.
    Does anyone have any ideas on how to stop these SQL Server 'Login failed for user 'sa' ' errors being generated?

    What happens if you use Profiler when running the report using Crystal Report Designer?
    If the report is run through Crystal Report Designer, NO 'Login failed for user' error messages appear in Profiler. Everything looks OK when run in Designer.
    Also, we need to know what patch level you are on.
    We are running Crystal Reports Server XI Release 2, Version: 11.5.8.8265
    No additional patches have been applied since Crystal Reports Server was installed.
    We may try the SA account if one of the connections fails to log on with the credentials you provided. Verify that the user you logged on with has rights to all tables.
    The reports were running through our application using the SA account. The SA account has permissions to these tables. The reports do generate results and appear perfectly fine in the application. The issue is that when our application requests the report from Crystal Reports Server, Crystal Reports Server delivers the correct report to our application; however, during the process of generating the report, 'Login failed for user 'sa'' errors are generated in SQL Server 2005.
    Also, I have tried creating a completely new SQL user called 'crystaluser'. I ran the report using Crystal Report Designer with the crystaluser logon and saved the report. Then I ran the same report through our application. SQL Profiler then displays 'Login failed for user 'crystaluser''.
    It seems as though Crystal Reports Server first executes the report using the default SQL user saved with the report, but sends either a blank password or no password at all. This generates the 'Login failed for user' error in SQL Server 2005. It then uses the SQL username/password my application gives it and successfully generates the report. Of course, this is only speculation.

  • Database logon failed when opening report with parameter values in CrystalReportViewer

    Hi,
    I designed two Crystal reports: report A contains parameter fields and report B does not contain any parameters.
    I can open both reports on the development site using the CrystalReportViewer control. When I open the reports on the testing site,
    I can open report B but cannot open report A; it displays the error message "Database logon failed". When I set EnableDatabaseLogonPrompt
    to true and try to open report A again, it shows the database connection data that was created in the report.
    In addition, it is strange that it displays the error "Database logon failed" when I click an item in the group tree panel of report B. This indicates that it can load data from the database
    on the testing site but connects to the development database when clicking items in the group tree panel.
    All reports connect to the database using Windows Authentication and use a dynamic database connection at runtime.
    How can I ensure the report always connects to the database using login data supplied dynamically at runtime?
    Below is my code for the database connection:
    string strServerName = null;
    string strDatabaseName = null;
    ReportDocument rptDoc = new ReportDocument();
    rptDoc.Load(strFilePath);
    ConnectionInfo connInfo = new ConnectionInfo();
    TableLogOnInfo logonInfo;
    // Apply the runtime server/database to every table in the report
    // (strFilePath, strSystemType and ReportHelper are defined elsewhere in the application).
    for (int i = 0; i < rptDoc.Database.Tables.Count; i++)
    {
        logonInfo = rptDoc.Database.Tables[i].LogOnInfo;
        ReportHelper.GetReportConnection(ref strServerName, ref strDatabaseName, strSystemType);
        logonInfo.ConnectionInfo.ServerName = strServerName;
        logonInfo.ConnectionInfo.DatabaseName = strDatabaseName;
        logonInfo.ConnectionInfo.IntegratedSecurity = true;
        rptDoc.Database.Tables[i].ApplyLogOnInfo(logonInfo);
        rptDoc.Database.Tables[i].Location = rptDoc.Database.Tables[i].Location.Substring(0, rptDoc.Database.Tables[i].Location.Length - 2);
    }
    crvViewer.ReportSource = rptDoc;
    crvViewer.DataBind();
    Development environment:
    - SAP Crystal Reports 2013 Support Pack 1
    - Visual Studio Professional 2012
    - .NET Framework 3.5
    - DLLs
    CrystalDecisions.Shared (v 13.0.8.1216)
    CrystalDecisions.Web (v 13.0.8.1216)
    CrystalDecisions.CrystalReports.Engine (v 13.0.8.1216)
    Database connection in crystal report:
    - Database Type: OLEDB (ADO)
    - Provider: SQLOLEDB
    - Integrated Security: True
    Thanks and Regards,
    Tony

    Hi Tonylck
    Try passing the login info to the Crystal report dynamically, as follows:
    using System;
    using System.Windows.Forms;
    using CrystalDecisions.CrystalReports.Engine;
    using CrystalDecisions.Shared;

    namespace WindowsApplication1
    {
        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private void button1_Click(object sender, EventArgs e)
            {
                ReportDocument cryRpt = new ReportDocument();
                TableLogOnInfos crtableLogoninfos = new TableLogOnInfos();
                TableLogOnInfo crtableLogoninfo = new TableLogOnInfo();
                ConnectionInfo crConnectionInfo = new ConnectionInfo();
                Tables CrTables;

                cryRpt.Load(@"PUT CRYSTAL REPORT PATH HERE\CrystalReport1.rpt");

                crConnectionInfo.ServerName = "YOUR SERVER NAME";
                crConnectionInfo.DatabaseName = "YOUR DATABASE NAME";
                crConnectionInfo.UserID = "YOUR DATABASE USERNAME";
                crConnectionInfo.Password = "YOUR DATABASE PASSWORD";

                // Apply the connection to every table in the report before viewing it.
                CrTables = cryRpt.Database.Tables;
                foreach (CrystalDecisions.CrystalReports.Engine.Table CrTable in CrTables)
                {
                    crtableLogoninfo = CrTable.LogOnInfo;
                    crtableLogoninfo.ConnectionInfo = crConnectionInfo;
                    CrTable.ApplyLogOnInfo(crtableLogoninfo);
                }

                crystalReportViewer1.ReportSource = cryRpt;
                crystalReportViewer1.Refresh();
            }
        }
    }
    Ref
    http://csharp.net-informations.com/crystal-reports/csharp-crystal-reports-dynamic-login.htm
    Mark as answer if you find it useful
    Shridhar J Joshi Thanks a lot
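    One more thing worth ruling out (an assumption on my part, not something confirmed in this thread): the CrystalReportViewer posts back when you click the group tree or submit parameter values, and if the logon code runs only on the first request, the reloaded report can fall back to the connection saved in the .rpt file. That would match report A (which prompts for parameters) failing on the testing site and report B's group tree reaching the development database. Below is a minimal sketch of re-binding a configured report on every request; ApplyRuntimeLogon is a hypothetical helper standing in for the ApplyLogOnInfo loop from Tony's code, and the report path is illustrative:
    using System;
    using System.Web.UI;
    using CrystalDecisions.CrystalReports.Engine;

    public partial class ReportAPage : Page
    {
        // crvViewer is the CrystalReportViewer declared in the page markup (designer partial class).
        protected void Page_Init(object sender, EventArgs e)
        {
            // Keep the configured ReportDocument in Session and re-bind it on every request,
            // so postbacks (group-tree clicks, parameter prompts) never see an unconfigured report.
            ReportDocument rptDoc = Session["rptDocA"] as ReportDocument;
            if (rptDoc == null)
            {
                rptDoc = new ReportDocument();
                rptDoc.Load(Server.MapPath("~/Reports/ReportA.rpt"));   // illustrative path
                ApplyRuntimeLogon(rptDoc);                              // hypothetical helper, see below
                Session["rptDocA"] = rptDoc;
            }
            crvViewer.ReportSource = rptDoc;   // must run on every request, not only when !IsPostBack
        }

        private void ApplyRuntimeLogon(ReportDocument rptDoc)
        {
            // Set ServerName, DatabaseName and IntegratedSecurity = true on every table via
            // ApplyLogOnInfo, exactly as in the loop earlier in this thread.
        }
    }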

  • Filesystem Restore is getting failed "NDMP server reported a general error"

    When I perform a filesystem restore to a different location, it fails with the error message "NDMP server reported a general error (name not found?)", whereas restoring
    to the same location succeeds without any error.
    Please find the attached transcript output for the failed job with debug on.
    ob>catxcr -fl0 admin/80
    2012/09/04.13:17:33 ______________________________________________________________________
    2012/09/04.13:17:33
    2012/09/04.13:17:33 Transcript for job admin/80 running on backup-server
    2012/09/04.13:17:33
    2012/09/04.13:17:33 (amh) qdv__automount_in_mh entered
    2012/09/04.13:17:33 (amh) qdv__automount_in_mh tape at 2012/09/04.13:17:33, flags 0x100
    2012/09/04.13:17:33 (amh) mount volume options list contains:
    2012/09/04.13:17:33 (amh) vtype 1 (rd), vid DC-ORCL-MF-000001, vs_create 1346566310, family (null), retain (null), size 0,
    mediainfo 2, scratch 0
    2012/09/04.13:17:34 (amh) don't preserve previous mh automount state
    2012/09/04.13:17:34 (gep) getting reservation for element 0x1 (dte)
    2012/09/04.13:17:34 (una) unload_anywhere entered
    2012/09/04.13:17:34 (fal) find_and_load entered
    2012/09/04.13:17:34 (fal) calling find_vid2 for volume DC-ORCL-MF-000001
    2012/09/04.13:17:34 (fal) find_vid2 worked - volume DC-ORCL-MF-000001 in se11 (not in drive)
    2012/09/04.13:17:34 (fal) moving volume FL-MF-000001 from se11 to dte1 (tape)
    2012/09/04.13:18:12 (fal) load of tape worked; returning to do automount
    2012/09/04.13:18:12 (fal) find_and_load exited
    2012/09/04.13:18:12 (atv) qdv__automount_this_vol entered
    2012/09/04.13:18:12 (atv) calling qdv__mount
    2012/09/04.13:18:12 (mt) qdv__read_mount_db() succeeded, found vol_oid 0
    2012/09/04.13:18:20 (mt) qdv__read_label() succeeded; read 65536 bytes
    2012/09/04.13:18:20 (mt) exp time obtained from label
    2012/09/04.13:18:20 (mt) qdb__label_event() returned vol_oid 137
    2012/09/04.13:18:20 (mt) setting vol_oid in mount_info to 137
    2012/09/04.13:18:20 (mt) updated volume close time from db
    2012/09/04.13:18:20 (atv) qdv__mount succeeded
    2012/09/04.13:18:20 (atv) automount worked
    2012/09/04.13:18:20 (atv) qdv__automount_this_vol exited
    2012/09/04.13:18:20 (gep) getting reservation for element 0x1 (dte)
    2012/09/04.13:18:20 (amh) 0 automount worked - returning
    2012/09/04.13:18:20 (amh) end of automount at 2012/09/04.13:18:20 (0x0)
    2012/09/04.13:18:20 (amh) returning from qdv__automount_in_mh
    2012/09/04.13:18:20 Info: volume in tape is usable for this operation.
    13:18:20 OBTR: obtar version 10.4.0.1.0 (Solaris) -- Fri Sep 23 23:41:16 PDT 2011
    Copyright (c) 1992, 2011, Oracle. All rights reserved.
    13:18:20 OBTR: obtar -Xjob:admin/80 -Xob:10.4 -xOz -Xbga:admin/80 -JJJJv -y /usr/tmp/[email protected] -Xrdf:admin/80 -e DC-ORCL-
    MF-000001 -F3 -f tape -Xrescookie:0xBE1A8F2 -H client01 -u
    13:18:20 RRDF: restore "/wdn/file01" as "/restore", pos 000043290003
    13:18:20 OBTR: running as root/root
    13:18:20 OBTR: record storage set to internal memory
    13:18:20 ATAL: reserved drive tape, cookie 0xBE1A8F2
    13:18:20 OBTR: obsd=1, is_job=1, is_priv=0, os=3
    13:18:20 OBTR: rights established for user admin, class admin
    13:18:20 SUUI: user info root/root, ??/??
    13:18:21 MAIN: using blocking factor 128 from media defaults/policies
    13:18:21 STTY: background terminal I/O or is a tty
    13:18:21 MAIN: interactive
    13:18:21 DOLM: nop (for tape (raw device "/dev/obt1"))
    13:18:21 DOLM: ok
    13:18:22 RLE: connecting to volume/archive database host
    13:18:22 RLE: device tape (raw device "/dev/obt1")
    13:18:22 RLE: mount_info is valid
    13:18:22 RLE: qdb__device_spec_se reports vol_oid 0, arch_oid 0
    13:18:22 A_O: using max blocking factor 128 from media defaults/policies
    13:18:22 A_O: tape device is local
    13:18:22 A_O: Devname: HP,Ultrium 4-SCSI,H61W
    13:18:22 Info version: 11
    13:18:22 WS version: 10.4
    13:18:22 Driver version: 10.4
    13:18:22 Max DMA: 2097152
    13:18:22 Blocksize in use: 65536
    13:18:22 Query frequency: 134217728
    13:18:22 Rewind on close: false
    13:18:22 Can compress: true
    13:18:22 Compression enabled: true
    13:18:22 Device supports encryption: true
    13:18:22 8200 media: false
    13:18:22 Remaining tape: 819375104
    13:18:22 A_GB: ar_block at 0x100352000, size=2097152
    13:18:22 A_GB: ar_block_enc at 0x100554000, size=2097152
    13:18:22 ADMS: reset library tape selection state
    13:18:22 ADMS: reset complete
    13:18:22 GLMT: returning "", code = 0x0
    13:18:22 VLBR: from chk_lm_tag: "", code = 0x0
    13:18:22 VLBR: tag on label just read: ""
    13:18:22 VLBR: master tag now ""
    13:18:22 RLE: noticed volume TEST-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
    13:18:22 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
    (alv) backup image label is valid, file 1, section 1
    (ial) invalidate backup image label (was valid)
    13:18:22 RSMD: rewrote mount db for tape
    13:18:22 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:18:22 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:18:22 CALE: created backup section oid list entry for oid 369
    13:18:22 PF: here's the label at the current position:
    Volume label:
    Intro time: Fri May 04 13:35:03 2012
    Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Volume ID: TEST-MF-000001
    Volume sequence: 1
    Volume set owner: root
    Volume set created: Sun Sep 02 11:56:50 2012
    Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
    Volume set expires: Sat Mar 02 11:56:50 2013
    Media family: TEST-MF
    Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Archive label:
    File number: 1
    File section: 1
    Owner: root
    Client host: client01
    Backup level: 0
    S/w compression: no
    Archive created: Sun Sep 02 11:56:50 2012
    Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
    Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
    Encryption: off
    Searching tape for requested file. Please wait...
    13:18:22 PF: spacing forward 2 FMs
    13:18:24 VLBR: not at bot: 0x90000000
    13:18:24 VLBR: tag on label just read: ""
    13:18:24 VLBR: master tag now ""
    13:18:24 RLE: noticed volume TEST-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
    13:18:24 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 380
    (alv) backup image label is not valid
    13:18:24 ULVI: set mh db volume id "TEST-MF-000001" (retid ""), volume oid 137, code 0
    13:18:24 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:18:24 CALE: created backup section oid list entry for oid 380
    13:18:24 VLBR: setting last section flag for backup section oid 369
    13:18:24 PF: here's the label at the current position:
    Volume label:
    Intro time: Fri May 04 13:35:03 2012
    Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Volume ID: TEST-MF-000001
    Volume sequence: 1
    Volume set owner: root
    Volume set created: Sun Sep 02 11:56:50 2012
    Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
    Volume set expires: Sat Mar 02 11:56:50 2013
    Media family: TEST-MF
    Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Archive label:
    File number: 3
    File section: 1
    Owner: root
    Client host: client01
    Backup level: 0
    S/w compression: no
    Archive created: Tue Sep 04 11:53:17 2012
    Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
    Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
    Encryption: off
    13:18:24 PF: at desired location
    13:18:24 ACFD: positioning (SCSI LOCATE) is available for this device
    13:18:24 ADMS: reset library tape selection state
    13:18:24 ADMS: reset complete
    13:18:24 VLBR: not at bot: 0x90000000
    13:18:24 VLBR: tag on label just read: ""
    13:18:24 VLBR: master tag now ""
    13:18:24 RLE: noticed volume DC-ORCL-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
    13:18:24 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 380
    (alv) backup image label is not valid
    13:18:25 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:18:25 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:18:25 CALE: found existing backup section oid list entry for oid 380
    13:18:25 ADMS: reset library tape selection state
    13:18:25 ADMS: reset complete
    13:18:25 RLE: read volume DC-ORCL-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
    13:18:25 RLE: qdb__read_se reports vol_oid 137, arch_oid 380
    (alv) backup image label is not valid
    13:18:25 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:18:25 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:18:25 PTNI: positioning to "/wdn/file01" at 000043290003
    13:18:27 CNPC: data host reports this butype_info:
    13:18:27 CNPC: tar (attr 0x2C78: B_DIRECT, R_DIRECT, B_INCR, R_INCR, B_FH_DIR)
    13:18:27 CNPC: DIRECT = y
    13:18:27 CNPC: HISTORY = y
    13:18:27 CNPC: LEVEL = 0
    13:18:28 SNP: connection types supported by mover -
    13:18:28 tcp
    13:18:28 SNP: connection types supported by data service -
    13:18:28 tcp
    13:18:28 SNP: selected data connection type: tcp
    13:18:28 SNP: using separate data and tape/mover connections
    13:18:28 SNP: the NDMP protocol version for tape/mover is 4, for data is 4
    13:18:28 SNP: backup-server's NDMP tape/mover service session id is 7844
    13:18:28 RONPC: begin OSB NDMP data restore
    13:18:28 RONPC: need to restore from "/wdn/file01" tree:
    13:18:28 RONPC: tape position before restore is 000043290004
    13:18:28 MGS: ms.record_size 65536, ms.record_num 0x0, ms.bytes_moved 0x0
    13:18:28 RONPC: about to start restore; here are the environment variables:
    13:18:28 RONPC: env BEGINTREE=1
    13:18:28 RONPC: env NAME=/wdn/file01
    13:18:28 RONPC: env AS=/restore
    13:18:28 RONPC: env POSLEN=6
    13:18:28 RONPC: env POS=
    13:18:28 RONPC: env BLEVEL=0
    13:18:28 RONPC: env FIRSTCH=1
    13:18:28 RONPC: env POS_HERE=1
    13:18:28 RONPC: env EX2KTYPE=
    13:18:28 RONPC: env DATA_BLOCK_SIZE=64
    13:18:28 RONPC: env SKIP_RECORDS=3
    13:18:28 RONPC: env LABEL_VERSION=0000012
    13:18:28 SMW: setting NDMP mover window to offset 0x0, length 0xFFFFFFFFFFFFFFFF
    13:18:28 MLIS: mover listen ok for tcp connection; these addresses were reported:
    13:18:28 MLIS: 0.0.0.0:58243
    13:18:28 MLIS: 192.168.3.109:58243
    13:18:28 RONPC: tape fileno/blockno before restore are 0/0
    13:18:28 APNI: a preferred network interface does not apply to this connection
    13:18:28 DPNI: load balancing is in use, skipping default PNI
    13:18:28 RONPC: directing data service to connect to mover
    13:18:01 PPVL: obtar option OB_JOB = admin/80
    13:18:01 PPVL: obtar option OB_RB = 10.4
    13:18:01 PPVL: obtar option OB_EXTR = 1
    13:18:01 PPVL: obtar option OB_EXTRACT_ONCE = 1
    13:18:01 PPVL: obtar option OB_DEBUG = 1
    13:18:01 PPVL: obtar option OB_DEBUG = 1
    13:18:01 PPVL: obtar option OB_DEBUG = 1
    13:18:01 PPVL: obtar option OB_DEBUG = 1
    13:18:01 PPVL: obtar option OB_VERBOSE = 1
    13:18:01 PPVL: obtar option OB_CLIENT = client01
    13:18:01 PPVL: obtar option OB_HONOR_IN_USE_LOCK = 1
    13:18:01 PPVL: obtar option OB_STAT = 1
    13:18:01 PPVL: obtar option OB_VOLUME_LABEL = 1
    13:18:01 PPVL: obtar option OB_SKIP_CDFS = 1
    13:18:01 PPVL: obtar option OB_DEVICE = tape
    13:18:01 PPVL: obtar option OB_BLOCKING_FACTOR = 128
    13:18:01 PPVL: obtar option OB_VERIFY_ARCHIVE = no
    13:18:01 PPVL: obtar option OB_PQT = 134217728
    13:18:01 DSIN: 2GB+ files are supported, 2GB+ directories are supported
    13:18:01 SETC: identity is already root/root
    13:18:28 qtarndmp__ssl_setup: SSL has been disabled via the security policy
    13:18:28 RONPC: issuing NDMP_DATA_START_RECOVER
    13:18:33 RONPC: started NDMP restore
    13:18:33 MNPO: received NDMP_NOTIFY_DATA_READ, offset 0x0, length 0xFFFFFFFFFFFFFFFF
    13:18:33 MNPO: sent corresponding NDMP_MOVER_READ
    13:18:33 QTOS: received osb_stats message for job admin/80, kbytes 64, nfiles 0
    13:18:33 await_ndmp_event: sending progress update
    13:18:33 SPU: sending progress update
    Error: Could not make file /restore: Is a directory
    13:19:27 MNPO: jumped over filemark fence
    13:19:27 VLBR: not at bot: 0x90000000
    13:19:27 VLBR: tag on label just read: ""
    13:19:27 QTOS: received osb_stats message for job admin/80, kbytes 3145856, nfiles 0
    13:19:27 VLBR: master tag now ""
    13:19:27 RLE: set kb remaining to 819375104
    13:19:27 RLE: qdb__set_kb_rem_se reports vol_oid 0, arch_oid 0
    13:19:27 RLE: noticed nil label
    13:19:27 RLE: qdb__noticed_se reports vol_oid 0, arch_oid 0
    13:19:27 VLBR: setting last section flag for backup section oid 380
    13:19:27 MNPO: sent successful mover close
    13:19:27 MNPO: data service halted with reason=internal error
    13:19:27 SNPD: Data Service reported bytes processed 0xC0020000
    13:19:27 SNPD: stopping NDMP data service (to transition to idle state)
    13:19:27 MNPO: mover halted with reason=connection closed
    13:19:27 MGS: ms.record_size 65536, ms.record_num 0xC002, ms.bytes_moved 0xC0020000
    Error: NDMP operation failed: unspecified error reported (see above)
    13:19:27 RONPC: finished NDMP restore with status 97
    13:19:27 RONPC: NDMP read-ahead positioned tape past filemark; backing up
    13:19:27 RONPC: We believe this because initial file # 0 isn't end file # 1
    13:19:27 RONPC: the section-relative block number at end of restore is 0x1
    13:19:27 RONPC: tape position after restore is 0001032B0080
    13:19:27 QREX: exit status upon entry is 97
    13:19:27 QREX: released reservation on tape drive tape
    13:19:27 RDB: reading volume record for oid 137
    13:19:27 RDB: reading section record for oid 369
    13:19:27 RDB: adding record for oid 369 (file 1, section 1) to section list
    13:19:27 RDB: reading section record for oid 378
    13:19:27 RDB: adding record for oid 378 (file 2, section 1) to section list
    13:19:27 RDB: reading section record for oid 380
    13:19:27 RDB: adding record for oid 380 (file 3, section 1) to section list
    13:19:27 RDB: file 1 has all 1 required sections; clearing incomplete backup flags
    13:19:27 RDB: reading section record for oid 369
    13:19:27 RDB: file 2 has all 1 required sections; clearing incomplete backup flags
    13:19:27 RDB: reading section record for oid 378
    13:19:27 RDB: file 3 has all 1 required sections; clearing incomplete backup flags
    13:19:27 RDB: reading section record for oid 380
    13:19:27 RDB: 1 volumes in volume list
    13:19:27 RDB: volume oid 137 reports first:last files of 1:3
    13:19:27 RDB: marking volume oid 137 as authoritative
    13:19:27 VMA: reading volume record for oid 137
    13:19:27 RLYX: exit status 97; checking allocs...
    13:19:27 RLYX: from mm__check_all: 1
    ob> catxcr -fl0 admin/81
    2012/09/04.13:19:29 ______________________________________________________________________
    2012/09/04.13:19:29
    2012/09/04.13:19:29 Transcript for job admin/81 running on backup-server
    2012/09/04.13:19:29
    2012/09/04.13:19:30 Info: mount data verified.
    2012/09/04.13:19:30 Info: volume in tape is usable for this operation.
    13:19:31 OBTR: obtar version 10.4.0.1.0 (Solaris) -- Fri Sep 23 23:41:16 PDT 2011
    Copyright (c) 1992, 2011, Oracle. All rights reserved.
    13:19:31 OBTR: obtar -Xjob:admin/81 -Xob:10.4 -xOz -Xbga:admin/81 -JJJJv -y /usr/tmp/[email protected] -Xrdf:admin/81 -e DC-ORCL-
    MF-000001 -F1 -f tape -Xrescookie:0xBE1A8F6 -H client01 -u
    13:19:31 RRDF: restore "/wdn/testf" as "/restore", pos 000000010003
    13:19:31 OBTR: running as root/root
    13:19:31 OBTR: record storage set to internal memory
    13:19:31 ATAL: reserved drive tape, cookie 0xBE1A8F6
    13:19:31 OBTR: obsd=1, is_job=1, is_priv=0, os=3
    13:19:31 OBTR: rights established for user admin, class admin
    13:19:31 SUUI: user info root/root, ??/??
    13:19:31 MAIN: using blocking factor 128 from media defaults/policies
    13:19:31 STTY: background terminal I/O or is a tty
    13:19:31 MAIN: interactive
    13:19:31 DOLM: nop (for tape (raw device "/dev/obt1"))
    13:19:31 DOLM: ok
    13:19:32 RLE: connecting to volume/archive database host
    13:19:32 RLE: device tape (raw device "/dev/obt1")
    13:19:32 RLE: mount_info is valid
    13:19:32 RLE: qdb__device_spec_se reports vol_oid 0, arch_oid 0
    13:19:32 A_O: using max blocking factor 128 from media defaults/policies
    13:19:32 A_O: tape device is local
    13:19:32 A_O: Devname: HP,Ultrium 4-SCSI,H61W
    13:19:32 Info version: 11
    13:19:32 WS version: 10.4
    13:19:32 Driver version: 10.4
    13:19:32 Max DMA: 2097152
    13:19:32 Blocksize in use: 65536
    13:19:32 Query frequency: 134217728
    13:19:32 Rewind on close: false
    13:19:32 Can compress: true
    13:19:32 Compression enabled: true
    13:19:32 Device supports encryption: true
    13:19:32 8200 media: false
    13:19:32 Remaining tape: 819375104
    13:19:32 A_GB: ar_block at 0x100352000, size=2097152
    13:19:32 A_GB: ar_block_enc at 0x100554000, size=2097152
    13:19:32 ADMS: reset library tape selection state
    13:19:32 ADMS: reset complete
    13:19:35 ACFD: positioning (SCSI LOCATE) is available for this device
    13:19:35 GLMT: returning "", code = 0x0
    13:19:35 VLBR: from chk_lm_tag: "", code = 0x0
    13:19:35 VLBR: tag on label just read: ""
    13:19:35 VLBR: master tag now ""
    13:19:35 RLE: noticed volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
    13:19:35 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
    (alv) backup image label is valid, file 4, section 1
    (ial) invalidate backup image label (was valid)
    13:19:35 RSMD: rewrote mount db for tape
    13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:19:35 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:19:35 CALE: created backup section oid list entry for oid 369
    13:19:35 PF: here's the label at the current position:
    Volume label:
    Intro time: Fri May 04 13:35:03 2012
    Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Volume ID: DC-ORCL-MF-000001
    Volume sequence: 1
    Volume set owner: root
    Volume set created: Sun Sep 02 11:56:50 2012
    Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
    Volume set expires: Sat Mar 02 11:56:50 2013
    Media family: DC-ORCL-MF
    Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
    Archive label:
    File number: 1
    File section: 1
    Owner: root
    Client host: client01
    Backup level: 0
    S/w compression: no
    Archive created: Sun Sep 02 11:56:50 2012
    Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
    Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
    Encryption: off
    13:19:35 PF: at desired location
    13:19:35 BT: resid is 1
    13:19:35 ACFD: positioning (SCSI LOCATE) is available for this device
    13:19:35 ADMS: reset library tape selection state
    13:19:35 ADMS: reset complete
    13:19:35 GLMT: returning "", code = 0x0
    13:19:35 VLBR: from chk_lm_tag: "", code = 0x0
    13:19:35 VLBR: tag on label just read: ""
    13:19:35 VLBR: master tag now ""
    13:19:35 RLE: noticed volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
    13:19:35 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
    (alv) backup image label is not valid
    13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:19:35 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:19:35 CALE: found existing backup section oid list entry for oid 369
    13:19:35 ADMS: reset library tape selection state
    13:19:35 ADMS: reset complete
    13:19:35 RLE: read volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
    13:19:35 RLE: qdb__read_se reports vol_oid 137, arch_oid 369
    (alv) backup image label is not valid
    13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:19:36 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:19:36 PTNI: positioning to "/wdn/testf" at 000000010003
    13:19:37 CNPC: data host reports this butype_info:
    13:19:37 CNPC: tar (attr 0x2C78: B_DIRECT, R_DIRECT, B_INCR, R_INCR, B_FH_DIR)
    13:19:37 CNPC: DIRECT = y
    13:19:37 CNPC: HISTORY = y
    13:19:37 CNPC: LEVEL = 0
    13:19:38 SNP: connection types supported by mover -
    13:19:38 tcp
    13:19:38 SNP: connection types supported by data service -
    13:19:38 tcp
    13:19:38 SNP: selected data connection type: tcp
    13:19:38 SNP: using separate data and tape/mover connections
    13:19:38 SNP: the NDMP protocol version for tape/mover is 4, for data is 4
    13:19:38 SNP: backup-server's NDMP tape/mover service session id is 7935
    13:19:38 RONPC: begin OSB NDMP data restore
    13:19:38 RONPC: need to restore from "/wdn/testf" tree:
    13:19:38 RONPC: tape position before restore is 000000010004
    13:19:38 MGS: ms.record_size 65536, ms.record_num 0x0, ms.bytes_moved 0x0
    13:19:38 RONPC: about to start restore; here are the environment variables:
    13:19:38 RONPC: env BEGINTREE=1
    13:19:38 RONPC: env NAME=/wdn/testf
    13:19:38 RONPC: env AS=/restore
    13:19:38 RONPC: env POSLEN=6
    13:19:38 RONPC: env POS=
    13:19:38 RONPC: env BLEVEL=0
    13:19:38 RONPC: env FIRSTCH=1
    13:19:38 RONPC: env POS_HERE=1
    13:19:38 RONPC: env EX2KTYPE=
    13:19:38 RONPC: env DATA_BLOCK_SIZE=64
    13:19:38 RONPC: env SKIP_RECORDS=3
    13:19:38 RONPC: env LABEL_VERSION=0000012
    13:19:38 SMW: setting NDMP mover window to offset 0x0, length 0xFFFFFFFFFFFFFFFF
    13:19:38 MLIS: mover listen ok for tcp connection; these addresses were reported:
    13:19:38 MLIS: 192.168.3.109:58303
    13:19:38 MLIS: 0.0.0.0:58303
    13:19:38 RONPC: tape fileno/blockno before restore are 0/0
    13:19:38 APNI: a preferred network interface does not apply to this connection
    13:19:38 DPNI: load balancing is in use, skipping default PNI
    13:19:38 RONPC: directing data service to connect to mover
    13:19:11 PPVL: obtar option OB_JOB = admin/81
    13:19:11 PPVL: obtar option OB_RB = 10.4
    13:19:11 PPVL: obtar option OB_EXTR = 1
    13:19:11 PPVL: obtar option OB_EXTRACT_ONCE = 1
    13:19:11 PPVL: obtar option OB_DEBUG = 1
    13:19:11 PPVL: obtar option OB_DEBUG = 1
    13:19:11 PPVL: obtar option OB_DEBUG = 1
    13:19:11 PPVL: obtar option OB_DEBUG = 1
    13:19:11 PPVL: obtar option OB_VERBOSE = 1
    13:19:11 PPVL: obtar option OB_CLIENT = client01
    13:19:11 PPVL: obtar option OB_HONOR_IN_USE_LOCK = 1
    13:19:11 PPVL: obtar option OB_STAT = 1
    13:19:11 PPVL: obtar option OB_VOLUME_LABEL = 1
    13:19:11 PPVL: obtar option OB_SKIP_CDFS = 1
    13:19:11 PPVL: obtar option OB_DEVICE = tape
    13:19:11 PPVL: obtar option OB_BLOCKING_FACTOR = 128
    13:19:11 PPVL: obtar option OB_VERIFY_ARCHIVE = no
    13:19:11 PPVL: obtar option OB_PQT = 134217728
    13:19:11 DSIN: 2GB+ files are supported, 2GB+ directories are supported
    13:19:11 SETC: identity is already root/root
    13:19:38 qtarndmp__ssl_setup: SSL has been disabled via the security policy
    13:19:38 RONPC: issuing NDMP_DATA_START_RECOVER
    13:19:43 RONPC: started NDMP restore
    13:19:43 MNPO: received NDMP_NOTIFY_DATA_READ, offset 0x0, length 0xFFFFFFFFFFFFFFFF
    13:19:43 MNPO: sent corresponding NDMP_MOVER_READ
    13:19:43 QTOS: received osb_stats message for job admin/81, kbytes 64, nfiles 0
    13:19:43 await_ndmp_event: sending progress update
    13:19:43 SPU: sending progress update
    /restore
    Error: Could not make file /restore: Is a directory
    13:19:44 MNPO: jumped over filemark fence
    13:19:44 VLBR: not at bot: 0x90000000
    13:19:44 VLBR: tag on label just read: ""
    13:19:44 QTOS: received osb_stats message for job admin/81, kbytes 51328, nfiles 0
    13:19:44 VLBR: master tag now ""
    13:19:44 RLE: noticed volume DC-ORCL-MF-000001, file 2, section 1, vltime 1346566310, vowner root, voltag
    13:19:44 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 378
    (alv) backup image label is not valid
    13:19:45 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
    13:19:45 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
    13:19:45 CALE: created backup section oid list entry for oid 378
    13:19:45 VLBR: setting last section flag for backup section oid 369
    13:19:45 MNPO: sent successful mover close
    13:19:45 MNPO: data service halted with reason=internal error
    13:19:45 SNPD: Data Service reported bytes processed 0x3220000
    13:19:45 SNPD: stopping NDMP data service (to transition to idle state)
    13:19:45 MNPO: mover halted with reason=connection closed
    13:19:45 MGS: ms.record_size 65536, ms.record_num 0x322, ms.bytes_moved 0x3220000
    Error: NDMP operation failed: unspecified error reported (see above)
    13:19:45 RONPC: finished NDMP restore with status 97
    13:19:45 RONPC: NDMP read-ahead positioned tape past filemark; backing up
    13:19:45 RONPC: We believe this because initial file # 0 isn't end file # 1
    13:19:45 RONPC: the section-relative block number at end of restore is 0x1
    13:19:45 RONPC: tape position after restore is 000003230080
    13:19:45 QREX: exit status upon entry is 97
    13:19:45 QREX: released reservation on tape drive tape
    13:19:45 RDB: reading volume record for oid 137
    13:19:45 RDB: reading section record for oid 369
    13:19:45 RDB: adding record for oid 369 (file 1, section 1) to section list
    13:19:45 RDB: reading section record for oid 378
    13:19:45 RDB: adding record for oid 378 (file 2, section 1) to section list
    13:19:45 RDB: reading section record for oid 380
    13:19:45 RDB: adding record for oid 380 (file 3, section 1) to section list
    13:19:45 RDB: file 1 has all 1 required sections; clearing incomplete backup flags
    13:19:45 RDB: reading section record for oid 369
    13:19:45 RDB: file 2 has all 1 required sections; clearing incomplete backup flags
    13:19:45 RDB: reading section record for oid 378
    13:19:45 RDB: file 3 has all 1 required sections; clearing incomplete backup flags
    13:19:45 RDB: reading section record for oid 380
    13:19:45 RDB: 1 volumes in volume list
    13:19:45 RDB: volume oid 137 reports first:last files of 1:3
    13:19:45 RDB: marking volume oid 137 as authoritative
    13:19:45 VMA: reading volume record for oid 137
    13:19:45 RLYX: exit status 97; checking allocs...
    13:19:45 RLYX: from mm__check_all: 1
    ob>
    Please help me to resolve the issue...
    Thanks,
    Sam

    If you're restoring a single file you have to name it in the destination path as well: if you are restoring /wdn/file01, specify the alternate path as /restore/file01 rather than just the directory /restore (which is why the transcript shows "Could not make file /restore: Is a directory").
    Thanks
    Rich

  • Bins Failing In OBIEE Report

    So I have a table, and I'm concerned with two fields in that table: Customer ID and Category. Category has the following values: A, B, C and D. I am simply doing a customer count for each category and have created the following bins.
    Category A - Count of all Customer ID's with A
    Category B - Count of all Customer ID's with B
    Category C - Count of all Customer ID's with C
    Category D - Count of all Customer ID's with D
    Category A and B - Count of all Customer ID's with A and B (Failing)
    Whenever I attempt to create the final bin, Category A and B, the report does not even show a line for it; it does not appear in the report at all. Is this a bug? How can this be resolved?

    Hi,
    A bin can only be used to group on a single value of a column, i.e. you can group on A or B or C, but not on A and B together.
    If you want to group two categories together, use a CASE expression in the column formula instead, for example something like: CASE WHEN Category IN ('A', 'B') THEN 'A and B' ELSE Category END
    http://www.varanasisaichand.com/2010/01/using-bins-in-obiee.html
    http://obieetutorialguide.blogspot.in/2012/03/conditional-expressions.html
    Nested Case Statements
    Edited by: Sudha777 on Feb 4, 2013 10:21 PM

  • Generate Prime Interface Availability and Utilization Report for unified APs

    Hi,
    I'm trying to generate an Interface Availability and an Interface Utilization report for unified APs on Prime Infrastructure 2.0, but it doesn't display any information. I have created device health and interface health templates under Design > Monitor Configuration > My Templates and deployed them under Deploy > Monitoring Deployment, but it still doesn't show any information.
    Thanks for your help.

    Hi Alejandro,
    Did you solve this problem? Or is it a bug?
    I am facing the same issue as you. I just run Report > Report Launch Pad > Device > Interface Utilization
    and then create a report for interface utilization.
    But it displays nothing when the report finishes running.
    I asked some guys in this forum, and they said maybe it's a PI 2.1 bug.
    BR
    Frank

  • SSRS 2008 R2 - Try to Open RDL - I got an error saying "Failed to open report 'reports_List.rdl' ... "Invalid character in the given encoding. Line 1, position 1".

    Hi,
    I am working on SSRS 2008 R2.
    My colleague gave me one RDL. I added it to my SSRS project in BIDS and tried to open it by double-clicking on that RDL.
    I got a popup error saying "Failed to open report 'reports_List.rdl'". I clicked on the
    Details button and noticed the error explanation: "Invalid character in the given encoding. Line 1, position 1".
    When I try to View Code on this RDL, I get another error message saying "Exception from HRESULT: 0x80041FEB".
    Can anybody suggest what exactly the root cause is and how I can resolve it?
    Thanks a lot in advance!
    best regards,
    Venkat

    Hi Venkat,
    Did you use Visual Studio 2010 in your test? This is a known issue with Visual Studio 2010. Please refer to the following document; it has a fix provided by the Microsoft Web Development Tools team to work around the issue.
    https://connect.microsoft.com/VisualStudio/feedback/details/552134/hresult-error-creating-timetracking-sample-web-site-project
    Since the issue is related to Visual Studio, I suggest you post the question in the following forum:
    http://social.msdn.microsoft.com/Forums/en-US/home?forum=Vsexpressvb 
    It is the appropriate place and more experts will be able to assist you.
    Regards,
    Alisa Tang
    Alisa Tang
    TechNet Community Support

  • "Failed to open report" error while running reports using Crystal Report

    We have a web application developed in ASP.NET, SQL Server 2005 and Crystal Reports 10.0. Sometimes, if a large number of users run reports from their individual nodes, they receive a "Failed to open report" error; if we restart IIS they are able to run the reports again. In addition, the users also sometimes get the error "Maximum Report limit attained". Can anyone provide me with a solution other than restarting IIS?

    Hi,
    Use the Close and Dispose methods on the Report object. It is also best practice to call GC.Collect for garbage collection.
    It might help you!
    Regards,
    Amit
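    For what that looks like in practice, here is a minimal sketch, assuming an ASP.NET Web Forms page with a CrystalReportViewer named crViewer and an illustrative report path; it is not the original poster's code, just the Close/Dispose pattern Amit describes, applied in Page_Unload so the report job is released as soon as the page has rendered:
    using System;
    using CrystalDecisions.CrystalReports.Engine;

    public partial class ReportPage : System.Web.UI.Page
    {
        // crViewer is the CrystalReportViewer declared in the page markup.
        private ReportDocument rpt;

        protected void Page_Load(object sender, EventArgs e)
        {
            rpt = new ReportDocument();
            rpt.Load(Server.MapPath("~/Reports/Sample.rpt"));   // illustrative report path
            crViewer.ReportSource = rpt;
        }

        protected void Page_Unload(object sender, EventArgs e)
        {
            // Release the print-engine job once the page has rendered; leaked ReportDocument
            // instances are what eventually exhaust the concurrent report-job limit and
            // produce "Failed to open report" / "Maximum Report limit attained".
            if (rpt != null)
            {
                rpt.Close();
                rpt.Dispose();
            }
            GC.Collect();   // also suggested in the reply above
        }
    }
    Restarting IIS helps because it resets the worker process and with it any leaked report jobs; disposing each ReportDocument per request addresses the same limit without the restart.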

  • "Failed to open report" error while running reports

    We have a web application developed in ASP.NET, SQL Server 2005 and Crystal Reports 10.0. Sometimes, if a large number of users run reports from their individual nodes, they receive a "Failed to open report" error; if we restart IIS they are able to run the reports again. In addition, the users also sometimes get the error "Maximum Report limit attained". Can anyone provide me with a solution other than restarting IIS?

    Hi Balla
    If you face this issue only while running the reports from a .Net web application, then please post this thread to the sdk development forum.
    You can follow the link below:
    https://www.sdn.sap.com/irj/sdn/businessobjects-sdk-forums
    Thanks

  • How I could generate an XML file from a report in version 4.0B

    Good morning,
    How could I generate an XML file from a report? Please note that I am using version 4.0B
    I don't have access to
    Billy Vital

    Hi,
    In the class CL_XML_DOCUMENT, there is a method EXPORT_TO_FILE that can be used to download an XML file.

  • Generate an HTML file from a Report in ABAP

    Good morning,
    How could I generate an HTML file from a report?
    Any ideas? I have found the function module WWW_ITAB_TO_HTML, but does someone have the standard code and an example of how to use this function?
    Thanks a lot,
    Hernán Restrepo

    Hi,
    I am facing a similar problem. I tried using the function module WWW_ITAB_TO_HTML in the report program, as I'm trying to generate a URL from a report, but I'm not able to get the expected results. The code is given below. Could someone please help me resolve this issue? Thanks in advance.
    DATA:   emp_name                     TYPE char80.
    DATA:   it_itabex                    TYPE zdb_ex_tty,
            it_emp                       TYPE TABLE OF zis_emp,
            it_org                       TYPE TABLE OF zis_org,
            it_pos                       TYPE TABLE OF zis_pos,
            it_pos_alloc                 TYPE TABLE OF zis_pos_alloc,
            it_res                       TYPE TABLE OF zis_res,
            it_res_alloc                 TYPE TABLE OF zis_res_alloc,
            ls_itabex                    TYPE zdb_ex_s.
    DATA:   lv_filename                  TYPE string,
            lv_path                      TYPE string,
            lv_fullpath                  TYPE string,
            lv_replace                   TYPE i.
    DATA qstring LIKE it_itabex OCCURS 10.
    DATA: url(200), url2(200), url3(200), fullurl(200).
    FIELD-SYMBOLS: <fs_emp>              LIKE LINE OF it_emp,
                   <fs_org>              LIKE LINE OF it_org,
                   <fs_pos>              LIKE LINE OF it_pos,
                   <fs_pos_alloc>        LIKE LINE OF it_pos_alloc,
                   <fs_res>              LIKE LINE OF it_res,
                   <fs_res_alloc>        LIKE LINE OF it_res_alloc.
    * Report program to export data from database to Excel.
    * Populate all the tables that have to be exported.
    SELECT * FROM zis_org       INTO TABLE it_org.
    SELECT * FROM zis_pos       INTO TABLE it_pos.
    SELECT * FROM zis_pos_alloc INTO TABLE it_pos_alloc.
    SELECT * FROM zis_emp       INTO TABLE it_emp.
    SELECT * FROM zis_res_alloc INTO TABLE it_res_alloc.
    SELECT * FROM zis_res       INTO TABLE it_res.
    * Append the Column Header
    CLEAR ls_itabex.
    ls_itabex-ipp_pos_id            = 'IPP Pos ID'.
    ls_itabex-emp_name              = 'Name'.
    ls_itabex-dt_of_join            = 'JoinedOn'.
    ls_itabex-emp_status            = 'Status'.
    ls_itabex-org_name              = 'Org'.
    ls_itabex-prj_name              = 'Project'.
    ls_itabex-mgr_name              = 'Line'.
    ls_itabex-designation           = 'Designation'.
    ls_itabex-specialization        = 'Specialization'.
    APPEND ls_itabex TO it_itabex.
    * Append all the tables into one internal table
    LOOP AT it_pos_alloc ASSIGNING <fs_pos_alloc>.
      CLEAR ls_itabex.
      ls_itabex-ipp_pos_id          = <fs_pos_alloc>-ipp_pos_id.
      READ TABLE it_emp ASSIGNING <fs_emp> WITH KEY emp_guid = <fs_pos_alloc>-emp_guid.
      IF sy-subrc = 0.
        CONCATENATE <fs_emp>-emp_fname <fs_emp>-emp_lname INTO ls_itabex-emp_name  SEPARATED BY space.
        ls_itabex-dt_of_join        = <fs_emp>-dt_of_join.
        ls_itabex-emp_status        = <fs_emp>-emp_status.
        ls_itabex-specialization    = <fs_emp>-specialization.
      ENDIF.
      READ TABLE it_pos ASSIGNING <fs_pos> WITH KEY ipp_pos_id = <fs_pos_alloc>-ipp_pos_id.
      IF sy-subrc = 0.
        ls_itabex-designation       = <fs_pos>-designation.
        READ TABLE it_org ASSIGNING <fs_org> WITH KEY  org_id = <fs_pos>-org_id.
        IF sy-subrc = 0.
          ls_itabex-org_name        = <fs_org>-org_name.
          ls_itabex-mgr_name        = <fs_org>-mgr_name.
        ENDIF.
      ENDIF.
      READ TABLE it_res ASSIGNING <fs_res> WITH KEY org_id = <fs_org>-org_id.
       ls_itabex-org_name         = <fs_org>-org_name.
      APPEND ls_itabex TO it_itabex.
    ENDLOOP.
    url = 'http://testweb/scripts/wgate/zvw10a/!?~language=en'.
    url2 = '&~OkCode(LGON)=LGON&login-login_user='.
    url3 = '&vbcom-vbeln='.
    CONCATENATE url url2 url3 INTO fullurl.
    WRITE: /'Staffing Excel'.
    CALL FUNCTION 'WWW_SET_URL'
      EXPORTING
        offset        = 12
        length        = 10
        func          = fullurl
      TABLES
        query_string  = qstring
      EXCEPTIONS
        invalid_table = 1
        OTHERS        = 2.
    Thanks & Regards,
    Preethi.

  • Volume erase failed: Media kit reports not enough space on device

    I was having problems with the external drive where I store my Time Machine backups, so I tried plugging and unplugging the drive (per earlier advice, and something that had worked in the past), but could not mount the drive. I ran Disk Utility and tried to verify the disk, which told me the disk needed to be repaired. I tried to repair the disk only to get an error message that Disk Utility could not repair the disk and it needed to be reformatted. So I tried to erase the disk with Disk Utility only to get the error message "Volume erase failed: Media kit reports not enough space on this device for requested operation". I am not sure what else to try at this point and could not find any similar questions here.
    The drive in question is a 3TB USB Seagate Backup + Desk Media, formatted as Mac OS Extended (Journaled). It's divided into two partitions, one of which (2.2 TB) holds only Time Machine backups of my desktop and laptop and the other of which (800GB) is formatted similarly with some files stored on it. I was able to verify and repair this other partition. I'm running Disk Utility 13 on an iMac (2.8 GHz Intel Core 2 Duo) running OSX 10.8.5
    Any suggestions for what to try next would be appreciated.

    Just to let you know I appear to have the *exact* same problem, even down to the 3TB hard-drive in question.
    I read recently that a time machine backup should have its own physical hard disk, not a separate partition on an otherwise-used disk. I wish I'd known that in advance of buying the external drive, as I would not have invested in a 3TB one if I had known I could only use it for Time Machine alone.
    Dianeoforegon does make a fair point, though, in saying that all backups in one place is asking for problems down the line.
    Incidentally, this Time Machine problem first started occurring when I upgraded to Mavericks. 9 times out of 10 my Time Machine partition would go corrupt for no reason. This definitely hadn't been happening at all before Mavericks. That same upgrade also killed off my entire Boot Camp partition, which caused me major headaches and I eventually had to simply reformat that partition and start again.

  • How to generate output file of a certain report in a specified directory

    Hello everyone,
    I want to know how to generate the output file of a certain report in a specified directory.
    Right now, the output file directory = /produits/OA/OAS/production/prodcomn/admin/out/PROD_us15k1bp/
    I want to generate the output file of a report in this directory:
    /produits/OA/OAS/production/prodcomn/admin/out/PROD_us15k1bp/FACTCLIENT
    Because the output of this report is very large, I don't want to put it in the standard output directory, which would reduce performance.
    Could you help me?
    Thanks a lot,
    Kinkichin

    Hi,
    "I want to know how to generate the output file of a certain report in a specified directory." See (Note: 158088.1 - Is Possible to Redirect the Concurrent Processes Output to a Specific Directory for a Specific Responsibility?).
    "Because the output of this report is very large, I don't want to put it in the standard output directory, which would reduce performance." There should be no performance issues if you place the output file in the same directory where other concurrent request output files are located.
    Regards,
    Hussein

  • I generate file (PDF, HTML, etc) the report is empty but I run in paper

    Hi
    I use Oracle Reports 10g.
    When I generate a file (PDF, HTML, etc.) the report is empty, but when I run it in the Paper Design it shows data across many pages. When I generate a file, only the information in the margin of the main section is shown and the body is empty.
    Thanks

    Thanks, Daniel, for pointing that out. Though your answer is helpful, I am not sure that is what I would want to do.
    The link you provide for CSV says "For each report there's an _report.xdo file that contains the XML structure of the report...". It suggests I modify the .xdo file for each report. I currently have 16 reports.
    Does that mean I modify the .xdo file for all 16, and what happens if someone creates a 17th report?
    Also, what if I run the same report using different input parameters, will that change the XML structure for the report and therefore require me to modify the .xdo again? (I think it should not change the XML structure, so the answer should be "no", unless I change the structure of the report.)
    Finally, the link you provide says that after doing what it suggests, "Now log back into BI publisher and select the report. You should now be able to see that CSV is now an option."
    CSV should be an option where? On what screen/page? Maybe PDF is already an option for me that I can't see because I do not know where that option is.
    I was hoping there would be something I could do on the xdo_metadata sheet (in the data constraints section or elsewhere), or in BI Publisher itself as some property of the report.
    I will try out what you suggest anyway.
    M. Jamal

  • Passing values to subreport in SSRS throwing an error - Data Retrieval failed for the report, please check the log for more details.

    Hi,
    I have a subreport called from the main report. The subreport is based on an MDX query against the SSAS cube. Some dimensions in the cube have the values 0 and 1.
    When I try to pass '0' to the subreport as the parameter value, it gives the error "Data Retrieval failed for the report, please check the log for more details".
    Actually, I am using a table to store these parameter values. In the main report I am calling this table (dataset) and passing these values to the subreport.
    So I have given [0],[1] and this works fine. When I give only either [0] or [1], it throws the error.
    Could you please advise on this?
    Appreciate all and any help.
    Thanks,
    Divya

    Hi Divya,
    Based on the current description, I understand that there is no issue if you pass two values from the main report to the subreport, while the issue occurs when passing one value to the subreport.
    To narrow down the issue, I want to confirm whether the subreport can run if there is only [0] or [1] in the subreport. If so, it indicates there is an error in the subreport's query statements. If that's not the case, it shows the issue occurs while passing
    values from the main report to the subreport. For further analysis, please post the details of the subreport's query statements to the forum.
    Regards,
    Heidi Duan
    Heidi Duan
    TechNet Community Support
