Add zfs dataset to zonecluster (sc 3.2)

Hi to all,
I have a two node Sun Cluster 3.2u2 (Solaris 10u7 - sparc).
I added a dataset (pool_sushi) to the zone cluster (following the instructions at http://docs.sun.com/app/docs/doc/820-4677/ggyww?a=view), but the zpool is not accessible from the zones; in fact, Solaris mounts it at /pool_sushi in the global zone.
I have also created an HAStoragePlus resource inside the zone cluster to manage the zpool. I can switch the zpool between the two nodes but, as before, it always gets mounted in the global zone.
Is this the correct behavior, or am I doing something wrong?
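For reference, the procedure I followed can be sketched like this (the resource and group names are illustrative; the dataset name matches my pool):

```shell
# From the global zone: delegate the zpool to the zone cluster as a dataset.
clzc configure sushi <<'EOF'
add dataset
set name=pool_sushi
end
commit
exit
EOF

# From inside the zone cluster (e.g. via zlogin sushi): create an
# HAStoragePlus resource to manage the pool.
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create sushi-rg
clresource create -g sushi-rg -t SUNW.HAStoragePlus -p Zpools=pool_sushi sushi-hasp-rs
clresourcegroup online -M sushi-rg
```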
uw-ma03 # zpool status pool_sushi
  pool: pool_sushi
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool_sushi  ONLINE       0     0     0
          c1t32d1   ONLINE       0     0     0

errors: No known data errors
uw-ma03 # clzc status
=== Zone Clusters ===
--- Zone Cluster Status ---
Name    Node Name   Zone HostName   Status   Zone Status
----    ---------   -------------   ------   -----------
sushi   uw-ma02     maki            Online   Running
        uw-ma03     nigiri          Online   Running
uw-ma03 # clzc configure sushi
clzc:sushi> info dataset
dataset:
name: pool_sushi
uw-ma03 # zpool list
NAME          SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
pool_sushi   19.9G   138K   19.9G   0%  ONLINE  /export/zones/sushi/root
rpool          68G  6.79G   61.2G   9%  ONLINE  -
tank           68G   709M   67.3G   1%  ONLINE  -
uw-ma03 # zlogin sushi
[Connected to zone 'sushi' pts/2]
Last login: Wed Sep 2 12:01:22 on pts/2
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
nigiri # zpool list
no pools available
nigiri #
nigiri # cluster status
=== Cluster Nodes ===
--- Node Status ---
Node Name    Status
---------    ------
maki         Online
nigiri       Online

=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   State
----------   ---------   ---------   -----
prueba-rg    nigiri      No          Online
             maki        No          Offline

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -------   --------------
prueba-hastp    nigiri      Online    Online
                maki        Offline   Offline
Thank you!

The problem looks like it is with the zpool behavior from the Solaris perspective.

I think you may be right. I created a zpool (tp) on another LUN, and:
uw-ma02 # zpool create tp c3t600C0FF00000000007FF587423A4A102d0
uw-ma02 # zfs list
tp                                110K   181G    18K  /tp
uw-ma02 # zpool export tp
uw-ma02 # mkdir /altroot
uw-ma02 # zpool import -R /altroot tp
uw-ma02 # zfs list
tp                                111K   181G    18K  /tp

The zpool and zfs parameters appear OK:
uw-ma02 # zpool get all tp
NAME  PROPERTY     VALUE       SOURCE
tp    size         184G        -
tp    used         116K        -
tp    available    184G        -
tp    capacity     0%          -
tp    altroot      /altroot    local
tp    health       ONLINE      -
tp    guid         8671829121781431947  -
tp    version      10          default
tp    bootfs       -           default
tp    delegation   on          default
tp    autoreplace  off         default
tp    cachefile    none        local
tp    failmode     wait        default
uw-ma02 # zfs get all tp
NAME  PROPERTY         VALUE                  SOURCE
tp    type             filesystem             -
tp    creation         Wed Sep 30 12:42 2009  -
tp    used             111K                   -
tp    available        181G                   -
tp    referenced       18K                    -
tp    compressratio    1.00x                  -
tp    mounted          yes                    -
tp    quota            none                   default
tp    reservation      none                   default
tp    recordsize       128K                   default
tp    mountpoint       /tp                    default
tp    sharenfs         off                    default
tp    checksum         on                     default
tp    compression      off                    default
tp    atime            on                     default
tp    devices          on                     default
tp    exec             on                     default
tp    setuid           on                     default
tp    readonly         off                    default
tp    zoned            off                    default
tp    snapdir          hidden                 default
tp    aclmode          groupmask              default
tp    aclinherit       restricted             default
tp    canmount         on                     default
tp    shareiscsi       off                    default
tp    xattr            on                     default
tp    copies           1                      default
tp    version          3                      -
tp    utf8only         off                    -
tp    normalization    none                   -
tp    casesensitivity  sensitive              -
tp    vscan            off                    default
tp    nbmand           off                    default
tp    sharesmb         off                    default
tp    refquota         none                   default
tp    refreservation   none                   default
uw-ma02 #

And (of course) there is nothing interesting in /var/adm/messages!
Mario

Similar Messages

  • ZFS dataset issue after zone migration

    Hi,
    I thought I'd document this as I could not find any references to people having run into this problem during zone migration.
    Last night I moved a full-root zone from a Solaris 10u4 host to a Solaris 10u7 host. It has a delegated zfs pool.
    The migration was smooth, with a zoneadm halt, followed by a zoneadm detach on the other node.
    An unmount of the ufs SAN LUN (which contained the zone root) on host A and a mount on host B (which is sharing the storage between the two nodes).
    The zoneadm attach worked after complaining about missing patches and packages (since the zone was Solaris 10 u4 as well).
    A zoneadm attach -F started the zone on host B, but did not detect the ZFS pool.
    After searching for possible fixes, trying to identify the issue, I halted the zone again on host B and did a zoneadm attach -u (which upgraded the zone to u7).
    At which point, a zoneadm attach and zoneadm boot resulted in the ZFS dataset being visible again...
    In all a smooth process, but I got a couple of gray hairs on my head trying to figure out what the problem with seeing the dataset after force-attaching the zone was...
    Any insights from Sun Gurus are welcome.
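    For anyone comparing notes, the sequence described above can be sketched as follows (the zone name and mount devices are illustrative, and the zone must already be configured on host B):

    ```shell
    # On host A: halt and detach the zone, then release the SAN LUN.
    zoneadm -z webzone halt
    zoneadm -z webzone detach
    umount /zones/webzone

    # On host B: mount the shared LUN, then attach and boot the zone.
    mount /dev/dsk/c2t1d0s0 /zones/webzone
    zoneadm -z webzone attach -u   # -u updates the zone to this host's patch level
    zoneadm -z webzone boot
    ```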

    I am looking at a similar migration scenario, so my question is did you get the webserver back up as well?
    Cheers,
    Davy

  • Add zfs volume to Solaris 8 branded zone

    Hi,
    I need to add a zfs volume to a Solaris 8 branded zone.
    Basically I've created the zvol and added the following to the zone configuration:
    # zonecfg -z test
    zonecfg:test> add device
    zonecfg:test:device> set match=/dev/zvol/dsk/sol8/vol
    zonecfg:test:device> end
    When I boot the zone it comes up OK, but I am unable to see the device: nothing in format, /dev/dsk, etc.
    I've also tried to set match to the raw device, to no avail.
    Basically I have numerous zvols to add and don't really want a load of mount points in the global zone lofs-mounted back into the local zone.
    Any ideas, please?
    Thanks...
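    One variant often suggested for this (a sketch; adjust the zvol path to your pool) is to match both the block and the raw device nodes:

    ```shell
    # Sketch: delegate both the block and raw zvol device nodes to the zone.
    zonecfg -z test <<'EOF'
    add device
    set match=/dev/zvol/dsk/sol8/vol
    end
    add device
    set match=/dev/zvol/rdsk/sol8/vol
    end
    commit
    exit
    EOF

    # Reboot the zone so the device nodes are created under its /dev.
    zoneadm -z test reboot
    ```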

    Thanks, but that's why I created zfs volumes and newfs'ed them to create UFS filesystems, and presented those to the zone.
    In the end I just created a script in /etc/rc2.d and mounted the filesystems there.

  • Zfs snapshot of "zoned" ZFS dataset

    I have a ZFS (e.g. tank/zone1/data) which is delegated to a zone as a dataset.
    As root in the global zone, I can "zfs snapshot" and "zfs send" this ZFS:
    zfs snapshot tank/zone1/data and zfs send tank/zone1/data, without any problem. When I "zfs allow" another user (e.g. amanda) with:
    zfs allow -ldu amanda mount,create,rename,snapshot,destroy,send,receive
    this user amanda CAN do zfs snapshot and zfs send on ZFS filesystems in the global zone, but she cannot run these commands on the delegated dataset (whilst root can), and I get a permission denied. A truss shows me:
    ioctl(3, ZFS_IOC_SNAPSHOT, 0x080469D0)          Err#1 EPERM [sys_mount]
    fstat64(2, 0x08045BF0)                          = 0
    cannot create snapshot 'tank/zone1/data@test'
    write(2, " c a n n o t   c r e a t".., 53) = 53
    Which setting am I missing to allow user amanda to do this?
    Anyone experiencing the same?
    Regards,
    Marcel
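    One thing worth checking (a sketch, not a confirmed fix): a delegated dataset has zoned=on, and ZFS restricts operations on zoned datasets from the global zone. Granting and using the permissions from within the zone itself may behave differently:

    ```shell
    # Check whether the dataset is marked as zoned (set when it is delegated).
    zfs get zoned tank/zone1/data

    # Sketch: perform the delegation from inside the zone instead
    # (e.g. via zlogin zone1), with the same permission list as above.
    zfs allow -ldu amanda mount,create,rename,snapshot,destroy,send,receive tank/zone1/data
    zfs allow tank/zone1/data   # verify the delegation
    ```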

    Hi Robert,
    Thanks for your response. I suspected this might be the case, but it seems like I get conflicting information from the Sun website. It still says recommended and security patches are free everywhere I looked except when I went to download them. We got this machine in October and I obtained and installed a recommended patch cluster as well as a bunch of ZFS patches (it might have even been early November, shortly before the update), using only a valid account with no contract.
    It would have been nice to know the policy on patch clusters was changing shortly, since now I want to use the snapshots as a backup for users.
    For us at least, an upgrade install would be a royal pain in the butt, since this machine is sitting in a data center in the basement and that would entail me signing in there and sitting on the floor while it installs from DVD media.

  • How to add two datasets inside a same tablix?

    Hi Everyone,
    I want to create a report using Project Server data, with two datasets in the same tablix. My report has grouping by Month, Resource, and Resource Role, and my dataset has a resource-name parameter. When I select a few resources as a filter, the LookupSet function gives me an array, but it takes the first value of the array for every resource. It should instead take the value related to each resource from the array.

    Hi Visakh,
    I tried to merge them at the backend, but it gave me the wrong results, because the resource has 168 hours of capacity in the selected date range.
    For example: I selected dates between 01.11.2013 and 30.11.2013. There are 3 projects in this date range for this resource, but it gave me a smaller value for capacity. I think it is computing the capacity from the matching rows in the Epm_AssignmentByDay_Userview table, but it should compute it for the selected date range.
    My SP:
    USE [ProjectServer_Reporting_Canli]
    GO
    /****** Object:  StoredProcedure [dbo].[CSP_ResourceCapacityControl]    Script Date: 01/26/2014 04:31:31 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[CSP_ResourceCapacityControl]
    @Resources VARCHAR(4000),
    @ResourceRole VARCHAR(4000),
    @DateStart DATETIME,
    @DateEnd DATETIME,
    @Programme VARCHAR(4000)
    AS
    BEGIN
    SELECT
    TimeByDay,
    ResourceUID,
    ResourceName,
    Capacity,
    ProjectUID,
    ProjectName,
    [Project  Type.Proje Türü],
    [Function.Fonksiyon],
    [Proje Durum],
    ProjectStartDate,
    ProjectFinishDate,
    [Role.Rol],
    [Speciality.Uzmanlık],
    Rol_KaynakAnalizi,
    [OutSource Company.Dış Kaynak Firma],
    [Source.Kaynak],
    [programme.program],
    SUM(BaselineAssignmentWork) AS BaselineWork,
    SUM(AssignmentActualWork) AS ActualWork,
    SUM(AssignmentWork) AS AssignmentWork,
    SUM(AssignmenRPlanWork) AS AssginmenRPlanWork
    FROM
    (SELECT
    ABD.TimeByDay,
    RUV.ResourceName,
    PUV.ProjectUID,
    PUV.ProjectName,
    PUV.[Project  Type.Proje Türü],
    PUV.[Function.Fonksiyon],
    PUV.[Proje Durum],
    PUV.ProjectStartDate,
    PUV.ProjectFinishDate,
    RUV.ResourceUID,
    RUV.[Role.Rol],
    RUV.[Speciality.Uzmanlık],
    RUV.Rol_KaynakAnalizi,
    RUV.[OutSource Company.Dış Kaynak Firma],
    RUV.[Source.Kaynak],
    PUV.[programme.program],
    RDB.Capacity,
    SUM(ABD.AssignmentBaseline0Work)AS BaselineAssignmentWork,
    SUM(ABD.AssignmentWork) AS AssignmentWork,
    SUM(ABD.AssignmentActualWork) AS AssignmentActualWork,
    SUM(ABD.AssignmentResourcePlanWork) AS AssignmenRPlanWork
    FROM
    dbo.MSP_EpmAssignmentByDay_UserView AS ABD
    INNER JOIN
    dbo.MSP_EpmAssignment_UserView AS AUV
    ON
    ABD.AssignmentUID = AUV.AssignmentUID
    INNER JOIN
    dbo.MSP_EpmProject_UserView AS PUV
    ON
    PUV.ProjectUID = AUV.ProjectUID
    INNER JOIN
    dbo.MSP_EpmResourceByDay_UserView RDB
    ON
    RDB.ResourceUID=AUV.ResourceUID and RDB.TimeByDay=ABD.TimeByDay
    INNER JOIN
       dbo.MSP_EpmResource_UserView AS RUV
    ON
    RUV.ResourceUID = AUV.ResourceUID
    INNER JOIN
    (SELECT ObjectUID
    FROM dbo.CF_SQLParameterINOperatorInStoreProcedure(@Resources)) As ParamResourceUIDs
    ON
    RUV.ResourceUID = ParamResourceUIDs.ObjectUID
    INNER JOIN
    (SELECT ObjectUID
    FROM dbo.CF_SQLParameterINOperatorInStoreProcedure(@ResourceRole)) As ParamResourceRoleID
    ON
    RUV.Rol_KaynakAnalizi = ParamResourceRoleID.ObjectUID
    INNER JOIN
    (SELECT ObjectUID
    FROM dbo.CF_SQLParameterINOperatorInStoreProcedure(@Programme)) As ParamProgramme
    ON
    PUV.[programme.program]=ParamProgramme.ObjectUID
    WHERE
    (RDB.TimeByDay BETWEEN (@DateStart) and (@DateEnd))
    --AND
    --ABD.AssignmentActualWork <> 0
    GROUP BY
    ABD.TimeByDay,
    PUV.ProjectName,
    PUV.[Project  Type.Proje Türü],
    PUV.[Function.Fonksiyon],
    PUV.[Proje Durum],
    PUV.ProjectStartDate,
    PUV.ProjectFinishDate,
    RUV.ResourceUID,
    PUV.ProjectUID,
    RUV.[Role.Rol],
    RUV.[Speciality.Uzmanlık],
    RUV.Rol_KaynakAnalizi,
    RUV.[OutSource Company.Dış Kaynak Firma],
    RUV.[Source.Kaynak],
    PUV.[programme.program],
    RDB.Capacity,
    RUV.ResourceName) AS Temp
    --INNER JOIN
    -- (SELECT ResourceRoleID
    --    FROM
    -- dbo.CF_SQLParameterINOperatorResourceBolum(@ResourceRole)) As ParamResourceRole
    -- ON
    -- (RUV.Role_KaynakAnalizi = ParamResourceRole.ResourceRoleID)
    GROUP BY
    TimeByDay,
    ResourceName,
    ResourceUID,
    ProjectName,
    [Project  Type.Proje Türü],
    [Function.Fonksiyon],
    [Proje Durum],
    ProjectStartDate,
    ProjectFinishDate,
    ProjectUID,
    [Role.Rol],
    [Speciality.Uzmanlık],
    Rol_KaynakAnalizi,
    [OutSource Company.Dış Kaynak Firma],
    [Source.Kaynak],
    Capacity,
    [programme.program]
    END
    RETURN 0

  • Unable to add a device (e.g. /dev/cua0) to a non-global zone

    Hi,
    I've installed solaris 10u4 on a x86 machine with the latest patches, installed with the smpatch utility
    The history:
    I've installed Solaris 10u3 without any patches, a quite minimal installation. I created a non-global zone, added a zfs dataset, added networking, and added one serial device (/dev/cua0). I installed hylafax from Blastwave in that zone, using the attached modem on /dev/cua0; all was working fine except some sendmail issues.
    Due to issues with samba, which I needed on this machine, I tried to update it; after ending up in dependency hell (due to the minimal installation) I gave up. Instead I did a fresh install of Solaris 10u4, also with the latest patches applied with the smpatch utility. Then I created a new zone and wanted to add the device /dev/cua0 like in the s10u3 installation, but the device doesn't appear in the non-global zone, so I've installed hylafax in the global zone temporarily.
    The question: any ideas or workarounds to bring the async device into a non-global zone again?
    I'm not a newbie with *nix-like systems (several years with BSD and GNU/Linux), but for Solaris I would classify myself as a newbie ;-)
    thanks in advance.
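    The device delegation being attempted can be sketched like this (the zone name is illustrative):

    ```shell
    # Sketch: add the serial device to a non-global zone's configuration.
    zonecfg -z faxzone <<'EOF'
    add device
    set match=/dev/cua0
    end
    commit
    exit
    EOF

    # The zone must be rebooted for the device node to appear inside it.
    zoneadm -z faxzone reboot
    ```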

    Hmm. If that didn't work, then it's possible you're running into a different problem.
    But I checked again and this is the one I was thinking of. Toward the bottom, some patches are referenced. I suppose they won't hurt, but I'm worried you're seeing something related to the 'cua' device rather than the general problem of device creation.
    http://www.opensolaris.org/jive/thread.jspa?messageID=171187
    Darren

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...

    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question and people can pay attention to you and not confuse you with that other person.

  • New Dataset button disabled in Lumira

    Hello All,
    We have installed the licensed version of Lumira desktop, version 1.17.2. When we connect to a HANA system, in the 'Prepare' tab the option to add a 'New dataset' is disabled. However, if we use Lumira to connect to SQL via ODBC, the option becomes enabled.
    Also, there are multiple options under the 'Share' tab (Export as file, Publish to SAP HANA, publish to explorer, Lumira cloud, Lumira server, SAP Streamwork, and publish to SAP BI) available when we connect to SQL. However, none of these options are available when we connect to HANA, except publish to Lumira server.
    Has anyone faced similar issues? We are currently using HANA SP6 and we have not installed Lumira server on BI. In such situations, is there a way to share the Lumira content among the users?
    Also, when I click on Help -> Enter Keycode, it says you can enter a keycode to unlock disabled features, but it also says I have installed a SAP Lumira Perpetual licence. Is there any other license we need to install?
    Thanks,
    Aamod.

    Hi,
    when working in HANA Online mode you are not acquiring any data locally into the local Sybase instance on the desktop.  That's why you cannot enrich/transform the data (in prepare room).
    With regards to Sharing, I presume you have a Story you want to publish, based on a HANA Online source. For reasons of security and governance, that data should not leave that HANA instance... Therefore, you can only publish to the Lumira Server which is hosted on that same HANA server.
    Regards,
    H

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank
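    The layout in points 1 and 2 above can be sketched as follows (device and zone names are illustrative):

    ```shell
    # Sketch: one ZFS pool per zone on the external array, with the
    # zonepath on a dataset in that pool.
    zpool create alpha-pool c2t0d0
    zfs create alpha-pool/root
    zfs set mountpoint=/zones/alpha alpha-pool/root
    chmod 700 /zones/alpha

    zonecfg -z alpha <<'EOF'
    create
    set zonepath=/zones/alpha
    commit
    exit
    EOF
    ```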

    Nope, that is still work in progress. Quite frankly, I wonder if you would even want such a feature, considering the way the filesystem works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the zfs pool, but it's less elegant than merely mounting a specific slice.
    I think zfs is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

  • How to create 2 conditions from 2 datasets in row visibility SSRS

    Hi Experts, 
    In SSRS, I want to create an expression for row visibility, but the expression must combine 2 conditions from 2 different datasets (DealStarts & RowofTrendingVisibility). I applied a solution I found online, but got this error message:
    "The Visibility.Hidden expression for the tablix ‘Tablix9’ contains an error: [BC30451] Name 'launchdate' is not declared."
    I think there is a minor issue in my syntax. Can someone help me correct it? Thank you.
    =iif((Last(MonthName("DealStarts"))=monthname(month(today())) or launchdate("RowofTrendingVisibility")<Parameters!StartDate.Value),true,false)

    Hi JTan414,
    I have checked the expression you provided, and there are several incorrect usages in it; please check and modify as below:
    For the MonthName function you should follow the grammar like:
    =MonthName(10,True)
    =MonthName(Month(Fields!BirthDate.Value),False)
    ="The month of your birthday is " & MonthName(Month(Fields!BirthDate.Value))
    You have added the dataset name "DealStarts" in the wrong place; it should look like below:
    =Last(MonthName(Month(Fields!BirthDate.Value)),"DealStarts")
    or
    =Last(MonthName(Fields!IntegerValueField.Value),"DealStarts")
    This will return the last value from the dataset
    There is no function named launchdate; the expression launchdate("RowofTrendingVisibility") is incorrect, and you shouldn't add the dataset name in this way.
    If the expression involves conditions from two different datasets, only a limited set of functions is supported, for example:
     =Last(Fields!BirthDate.Value,"DataSetName")
    =First(Fields!BirthDate.Value,"DataSetName")
    More details information about how to use expression in SSRS:
    Expression Examples (Report Builder and SSRS)
    If you still have any problem, please try to provide more details information about your requirements and also some sample data.
    Regards,
    Vicky Liu
    TechNet Community Support

  • Interface using add/remove data set i.e set operator

    Hello...Experts,
    I have another problem with add/remove datasets.
    Suppose I have 3 source tables with the same structure, say TAB1(empno,ename,deptno), TAB2(empno,ename,deptno) and TAB3(empno,ename,deptno), but with different values, and a target table TARGET_TAB(empno,ename,deptno,v_ename,v_empno). I want to map them using the Union and Intersection operators.
    My mapping is like this:
    Dataset1
    TAB1 TARGETTAB_
    empno--------------- empno
    ename--------------- ename
    deptno-------------- deptno
    v_ename
    v_empno
    Dataset2
    TAB2 TARGETTAB_
    empno----------------- empno
    ename
    deptno----------------- deptno
    ename------------------ v_ename
    v_empno
    Dataset3
    TAB2 TARGETTAB_
    empno
    ename-------------------ename
    deptno------------------deptno
    v_ename
    empno-------------------v_empno
    That means one column from TAB2 and TAB3 is mapping with one extra column of TARGET_TAB table.
    When I perform this and click on flow, it shows the error ODI-20350: your diagram contains one or more FATAL/CRITICAL errors.
    Is it possible to build such an interface? How? And what should I do to avoid the error?
    Please advise.
    Thanks.

    Hello Sir,
    When I take a different number of columns for the different tables, the flow does not work; it shows the fatal error I mentioned before.
    But when I take the same number of columns for all tables, with null values in TAB1 (sal and job are null) and TAB2 (job is null), no data is loaded into the target after the data flow, since there is an intersection and null values in the tables.
    For example:
    "select empno,ename,deptno,sal,job from TAB1 where deptno=10 union
    select empno,ename,deptno,sal,job from TAB2 where sal>1500 intersect
    select empno,ename,deptno,sal,job from TAB3 where job='manager' ";
    This query returns 0 rows, as the sal and job columns are null and, due to the intersect, the total number of rows becomes 0.
    Without taking null-value columns in TAB1 (sal, job) and TAB2 (job), can we build the interface?
    Thanks.

  • Interface with 2 datasets

    Hi!
    I would like to create an interface with 2 datasets, but I have a problem: I can't map all the fields in both datasets.
    For example:
    I have a target table with 3 fields: test_tab (f1, f2, f3).
    From one dataset I want to load fields f1 and f2, and from the second dataset fields f2 and f3.
    The dataset operator is UNION ALL.
    I get an error message because not all fields are mapped in both datasets.
    How do I make it work without mapping all the fields?
    Thanks in advance!

    Please follow the steps
    Step 1. At the top of the Interface you will find Add/Remove Datasets; click that.
    Step 2. Add two datasets and use Union All as the second dataset's operator.
    Step 3. Drag source 1 into Dataset 1 and map accordingly.
    Step 4. Drag source 2 into Dataset 2 and map accordingly.
    Step 5. Select the required IKM and run; that should work.

  • ZFS root and Live upgrade

    Is it possible to create /var as its own ZFS dataset when using liveupgrade? With ufs, there's the -m option to lucreate. It seems like any liveupgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
    merill
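    For comparison, the UFS-style split mentioned above looks roughly like this (the slice names are made up):

    ```shell
    # Sketch: with UFS boot environments, lucreate's -m option can place
    # /var on its own slice.
    lucreate -n newBE \
      -m /:/dev/dsk/c1t1d0s0:ufs \
      -m /var:/dev/dsk/c1t1d0s3:ufs
    ```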

    Hey man,
    I banged my head against the wall over the same question :-)
    One thing that might help you anyway: I found a way to move ufs filesystems to the new ZFS pool.
    Let's say you have a ufs filesystem with, say, an application server and related stuff on /app, which is on c1t0d0s6.
    When you create the new ZFS-based BE, /app is shared between the BEs.
    In order to move it to the new BE, all you need to do is comment out the lines in /etc/vfstab you want moved,
    then run lucreate to create the ZFS BE.
    After that, create a new dataset for /app, but give it a different mountpoint.
    Copy all your stuff,
    rename the original /app,
    and set the dataset's mountpoint.
    That's it: all your stuff is now on ZFS.
    Hope it will be useful,
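    The steps above can be sketched as follows (pool, dataset, and BE names are illustrative):

    ```shell
    # 1. Comment out the /app line in /etc/vfstab, then create the ZFS BE.
    lucreate -n zfsBE -p rpool

    # 2. Create a dataset for /app under a temporary mountpoint.
    zfs create -o mountpoint=/app.new rpool/app

    # 3. Copy the data, preserving permissions and timestamps.
    cd /app && find . | cpio -pdmu /app.new

    # 4. Move the old directory aside and point the dataset at /app.
    cd / && mv /app /app.old
    zfs set mountpoint=/app rpool/app
    ```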

  • Programaticaly add Table in CR database fields

    Hi to all,
    I'm using CR 11.5, Visual Studio 2005 (C#/asp.net) and SQL Server 2005.
    At runtime I need to add FieldObjects to the Crystal Report. I can do that now, but only because at design time I added the table under Crystal Report -> FieldObjects -> Database Fields; with the table in place I can add the FieldObjects programmatically.
    What I am trying to do is also add the table to the CR Database Fields programmatically.
    Is this possible? Experts, please help me.
    Thanks in advance

    I found a way, like this:
    private void AddTableFromDataSet(ref CrystalDecisions.CrystalReports.Engine.ReportDocument rpt, System.Data.DataSet ds)
    {
        ISCDReportClientDocument rasReport = rpt.ReportClientDocument;
        // Convert the DataSet to an ISCRDataSet object (something the ISCDReportClientDocument can understand)
        CrystalDecisions.ReportAppServer.DataDefModel.ISCRDataSet rasDS;
        rasDS = CrystalDecisions.ReportAppServer.DataSetConversion.DataSetConverter.Convert(ds);
        // Add the dataset as a data source to the report
        rasReport.DatabaseController.AddDataSource((object)rasDS);
        // Add a field to the report canvas
        // Note: this is quick and dirty. No positioning, resizing, formatting, etc.
        CrystalDecisions.ReportAppServer.Controllers.ISCRResultFieldController rfc;
        CrystalDecisions.ReportAppServer.DataDefModel.ISCRTable crTable;
        CrystalDecisions.ReportAppServer.DataDefModel.ISCRField crField;
        rfc = rasReport.DataDefController.ResultFieldController;
        crTable = rasReport.Database.Tables[0];
        crField = crTable.DataFields[2];    // Hardcoded field "Customer Name" in the Customer table from the Xtreme sample database
        rfc.Add(-1, crField);
        // Save the report template to disk (without data)
        //object path = @"c:\documents and settings\administrator\desktop\";
        //rasReport.SaveAs("test.rpt", ref path, 0);
        //MessageBox.Show("Done!");
    }

  • Solaris 11: configuring/installing/verifying zone: dataset does not exist

    Hello all, I am working my way through setting up a dataset via the instructions listed here:
    http://docs.oracle.com/cd/E23824_01/html/821-1460/z.conf.start-85.html
    http://docs.oracle.com/cd/E23824_01/html/821-1460/z.inst.task-2.html
    I am now trying to verify my zone (zoneadm -z stfsun1 verify) and I get the following error message:
    "could not verify zfs dataset waas/stfsun1: dataset does not exist
    zoneadm: zone stfsun1 failed to verify"
    I did a search but nothing of benefit showed up. Can anyone point me in the right direction?
    Edited by: thisisbasil on May 22, 2012 9:04 AM

    zonecfg -z stfsun1 info
    zonename: stfsun1
    zonepath: /zones/stfsun1
    brand: solaris
    autoboot: true
    bootargs: -m verbose
    file-mac-profile:
    pool:
    limitpriv: default,sys_time
    scheduling-class: FSS
    ip-type: exclusive
    hostid: 80825649
    fs-allowed:
    [max-sem-ids: 10485200]
    fs:
         dir: /usr/local
         special: /opt/local
         raw not specified
         type: lofs
         options: []
    anet:
         linkname: net0
         lower-link: auto
         allowed-address not specified
         configure-allowed-address: true
         defrouter not specified
         allowed-dhcp-cids not specified
         link-protection: mac-nospoof
         mac-address: random
         mac-prefix not specified
         mac-slot not specified
         vlan-id not specified
         priority not specified
         rxrings not specified
         txrings not specified
         mtu not specified
         maxbw not specified
         rxfanout not specified
    anet:
         linkname: net1
         lower-link: auto
         allowed-address not specified
         configure-allowed-address: true
         defrouter not specified
         allowed-dhcp-cids not specified
         link-protection: mac-nospoof
         mac-address: random
         mac-prefix not specified
         mac-slot not specified
         vlan-id not specified
         priority not specified
         rxrings not specified
         txrings not specified
         mtu not specified
         maxbw not specified
         rxfanout not specified
    device:
         match: /dev/wifi/*
         allow-partition not specified
         allow-raw-io not specified
    device:
         match: /dev/ipnet/*
         allow-partition not specified
         allow-raw-io not specified
    device:
         match: /dev/*dsk/*
         allow-partition not specified
         allow-raw-io: true
    dedicated-cpu:
         ncpus: 1
         importance: 10
    capped-memory:
         physical: 1G
         [swap: 2G]
         [locked: 500M]
    attr:
         name: comment
         type: string
         value: "This is the CodeTEST work zone."
    dataset:
         name: waas/stfsun1
         alias: stfsun1
    rctl:
         name: zone.max-swap
         value: (priv=privileged,limit=2147483648,action=deny)
    rctl:
         name: zone.max-locked-memory
         value: (priv=privileged,limit=524288000,action=deny)
    rctl:
         name: zone.max-sem-ids
         value: (priv=privileged,limit=10485200,action=deny)
    Edited by: thisisbasil on May 22, 2012 9:56 AM
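    The configuration above delegates the dataset waas/stfsun1 to the zone, and "dataset does not exist" usually just means that dataset was never created in the global zone. A minimal sketch of the likely fix, assuming a pool named waas already exists on this host:

```shell
# In the global zone: create the dataset the zone configuration refers to
zfs create waas/stfsun1

# Confirm it exists, then verify the zone again
zfs list waas/stfsun1
zoneadm -z stfsun1 verify
```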
