
Publications


Chapters

Adams, J., Sibbritt, D., Broom, A., Kroll, T., Prior, J., Dunston, R., Leung, B., Davidson, P. & Andrews, G. 2017, 'Traditional, complementary and integrative medicine as self-care in chronic illness' in Adams, J. et al. (eds), Public Health and Health Services Research in Traditional, Complementary and Integrative Medicine: International Perspectives, Imperial College Press, London.

Stoianoff, N.P., Cahill, A. & Wright, E.A. 2017, 'Indigenous knowledge: what are the issues?' in Stoianoff, N.P. (ed), Indigenous Knowledge Forum: Comparative Systems for Recognising and Protecting Indigenous Knowledge and Culture, LexisNexis, pp. 11-37.

Wright, E.A., Cahill, A. & Stoianoff, N.P. 2017, 'Australia and Indigenous traditional knowledge' in Stoianoff, N.P. (ed), Indigenous Knowledge Forum: Comparative Systems for Recognising and Protecting Indigenous Knowledge and Culture, LexisNexis, pp. 39-68.

Journal articles

Anderson, C. & Ryan, L.M. 2017, 'A Comparison of Spatio-Temporal Disease Mapping Approaches Including an Application to Ischaemic Heart Disease in New South Wales, Australia.', Int J Environ Res Public Health, vol. 14, no. 2.
View/Download from: UTS OPUS or Publisher's site

The field of spatio-temporal modelling has witnessed a recent surge as a result of developments in computational power and increased data collection. These developments allow analysts to model the evolution of health outcomes in both space and time simultaneously. This paper models the trends in ischaemic heart disease (IHD) in New South Wales, Australia over an eight-year period between 2006 and 2013. A number of spatio-temporal models are considered, and we propose a novel method for determining the goodness-of-fit for these models by outlining a spatio-temporal extension of the Moran's I statistic. We identify an overall decrease in the rates of IHD, but note that the extent of this health improvement varies across the state. In particular, we identify a number of remote areas in the north and west of the state where the risk stayed constant or even increased slightly.
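The abstract does not detail the proposed spatio-temporal extension; for orientation, the minimal numpy sketch below computes the classical (purely spatial) Moran's I statistic that such an extension builds on. The toy rates and neighbour weights are invented for illustration.

```python
import numpy as np

def morans_i(x, w):
    """Classical (purely spatial) Moran's I for area values x and weight matrix w.

    x : 1-D array of rates per area
    w : 2-D array of spatial weights (w[i, j] > 0 if areas i and j are neighbours)
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean rate
    num = np.sum(w * np.outer(z, z))      # sum_ij w_ij * z_i * z_j
    den = np.sum(z ** 2)
    return (n / w.sum()) * num / den

# Toy example: four areas on a line with rook-style neighbour weights (invented data)
rates = [1.2, 1.1, 0.7, 0.6]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(rates, w))                 # values near +1 indicate spatial clustering
```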

Awwad, S. & Piccardi, M. 2017, 'Prototype-based budget maintenance for tracking in depth videos', Multimedia Tools and Applications, pp. 1-16.
View/Download from: Publisher's site

The use of conventional video tracking based on color or gray-level videos often raises concerns about the privacy of the tracked targets. To alleviate this issue, this paper presents a novel tracker that operates solely from depth data. The proposed tracker is designed as an extension of the popular Struck algorithm, which leverages the effective framework of structural SVM. The main contributions of our paper are: i) a dedicated depth feature based on local depth patterns, ii) a heuristic for handling view occlusions in depth frames, and iii) a technique for keeping the number of support vectors within a given "budget" so as to limit computational costs. Experimental results over the challenging Princeton Tracking Benchmark (PTB) dataset report remarkable accuracy compared with the original Struck tracker and other state-of-the-art trackers using depth and RGB data.
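The prototype-based budgeting scheme itself is not spelled out in the abstract. As a rough illustration of the general idea of a support-vector "budget", the sketch below uses a common heuristic (drop the support vector whose removal perturbs the weight vector least); it is an assumption-laden stand-in, not the authors' method.

```python
import numpy as np

def enforce_budget(support_vectors, betas, budget):
    """Keep at most `budget` support vectors by repeatedly dropping the vector
    whose removal changes the SVM weight vector the least (smallest |beta_i| * ||x_i||).
    Generic heuristic for illustration only; not the paper's prototype-based scheme."""
    sv, b = list(support_vectors), list(betas)
    while len(sv) > budget:
        impact = [abs(b[i]) * np.linalg.norm(sv[i]) for i in range(len(sv))]
        drop = int(np.argmin(impact))     # least influential support vector
        sv.pop(drop)
        b.pop(drop)
    return np.asarray(sv), np.asarray(b)

# Hypothetical usage: 10 support vectors of dimension 64, budget of 5
sv, b = enforce_budget(np.random.randn(10, 64), np.random.randn(10), budget=5)
```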

Chai, R., Ling, S.H., San, P.P., Naik, G., Nguyen, T.N., Tran, Y., Craig, A. & Nguyen, H.T. 2017, 'Improving EEG-based Driver Fatigue Classification using Sparse-Deep Belief Networks', Frontiers in Neuroscience, vol. 11, no. 103, pp. 1-14.
View/Download from: UTS OPUS or Publisher's site

This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared with other classifiers, sparse-DBN is a semi-supervised learning method which combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and allows the network to learn both low-level and high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN) and original deep belief network (DBN) classifiers are used. The classification results show that, using the AR feature extractor and the DBN classifier, the system achieves an improved classification performance with a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6% and an area under the receiver operating characteristic curve (AUROC) of 0.94, compared with the ANN classifier (sensitivity of 80.8%, specificity of 77.8%, accuracy of 79.3%, AUROC of 0.83) and the BNN classifier (sensitivity of 84.3%, specificity of 83%, accuracy of 83.6%, AUROC of 0.87). Using the sparse-DBN classifier, the classification performance improves further, with a sensitivity of 93.9%, a specificity of 92.3% and an accuracy of 93.1%, with an AUROC of 0.96. Overall, the sparse-DBN classifier improves accuracy by 13.8%, 9.5% and 2.5% over the ANN, BNN and DBN classifiers, respectively.
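As a small illustration of the AR feature-extraction step described here, the following sketch fits an AR model to a single EEG channel by ordinary least squares; the channel count, epoch length and model order are arbitrary placeholders, not the paper's settings.

```python
import numpy as np

def ar_features(signal, order=8):
    """Fit an AR(order) model to one EEG channel by least squares and return
    the coefficients as a feature vector (illustrative only; order is arbitrary)."""
    x = np.asarray(signal, dtype=float)
    # Design matrix: predict x[t] from the previous `order` samples
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Hypothetical epoch: 4 channels x 512 samples of fake EEG, one AR vector per channel
epoch = np.random.randn(4, 512)
features = np.concatenate([ar_features(ch) for ch in epoch])
```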

Chai, R., Naik, G., Nguyen, T.N., Ling, S., Tran, Y., Craig, A. & Nguyen, H. 2017, 'Driver Fatigue Classification with Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-based System.', IEEE journal of biomedical and health informatics, vol. 21, no. 3, pp. 715-724.
View/Download from: UTS OPUS or Publisher's site

This paper presents a two-class electroencephalography (EEG)-based classification of driver fatigue (fatigue state vs. alert state) in 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8% and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR modeling (feature extractor) and the Bayesian neural network (classifier) provides the best outcome (p < 0.05), with the highest area under the receiver operating characteristic curve (AUC-ROC = 0.93), compared with other methods such as power spectral density (PSD) as the feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be utilized effectively in a countermeasure device for driver fatigue identification and other adverse-event applications.

Chai, R., Naik, G.R., Ling, S.H. & Nguyen, H.T. 2017, 'Hybrid brain–computer interface for biomedical cyber-physical system application using wireless embedded EEG systems', BioMedical Engineering OnLine, vol. 16, no. 5, pp. 1-23.
View/Download from: UTS OPUS or Publisher's site

Chen, Z., You, X., Zhong, B., Li, J. & Tao, D. 2017, 'Dynamically Modulated Mask Sparse Tracking', IEEE Transactions on Cybernetics.
View/Download from: Publisher's site

Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although the robustness to local corruptions has been improved, prevailing trackers are still sensitive to large-scale corruptions, such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique that depends on a subspace learning-based appearance model. Our contributions are twofold. First, mask templates produced by frame difference are introduced into our template dictionary. Since the mask templates contain abundant structure information of corruptions, the model can encode information about the corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by adopting system dynamics, which account for the moving tendency of the object. Second, we provide a theoretical guarantee that, by adapting the modulated template dictionary system, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficiently as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially when there are challenges such as pose variation, occlusion, and illumination changes.

Chiarella, C., He, X.Z., Shi, L. & Wei, L. 2017, 'A behavioural model of investor sentiment in limit order markets', Quantitative Finance, vol. 17, no. 1, pp. 71-86.
View/Download from: Publisher's site

By incorporating behavioural sentiment in a model of a limit order market, we show that behavioural sentiment not only helps to replicate most of the stylized facts in limit order markets simultaneously, but it also plays a unique role in explaining those stylized facts that cannot be explained by noise trading, such as fat tails in the return distribution, long memory in the trading volume, an increasing and non-linear relationship between trade imbalance and mid-price returns, as well as the diagonal effect, or event clustering, in order submission types. The results show that behavioural sentiment is an important driving force behind many of the well-documented stylized facts in limit order markets.

Chomsiri, T., He, X.S., Nanda, P. & Tan, Z. 2017, 'Hybrid Tree-rule Firewall for High Speed Data Transmission', IEEE Transactions on Cloud Computing.
View/Download from: UTS OPUS or Publisher's site

Dai, M., Cheng, S. & He, X.S. 2017, 'Hybrid generative–discriminative hash tracking with spatio-temporal contextual cues', Neural Computing and Applications.
View/Download from: UTS OPUS or Publisher's site

Visual object tracking is of great application value in video monitoring systems. Recent work on video tracking has taken into account the spatial relationship between the targeted object and its background. In this paper, the spatial relationship is combined with the temporal relationship between features on different video frames so that a real-time tracker is designed based on a hash algorithm with spatio-temporal cues. Different from most existing work on video tracking, which treats tracking as image matching or image classification alone, we propose a hierarchical framework and conduct both matching and classification tasks to generate a coarse-to-fine tracking system. We develop a generative model under a modified particle filter with hash fingerprints for the coarse matching by the maximum a posteriori, and a discriminative model for the fine classification by maximizing a confidence map based on a context model. The confidence map reveals the spatio-temporal dynamics of the target. Because a hash fingerprint is merely a binary vector and the modified particle filter uses only a small number of particles, our tracker has a low computational cost. By conducting experiments on eight challenging video sequences from a public benchmark, we demonstrate that our tracker outperforms eight state-of-the-art trackers in terms of both accuracy and speed.
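The coarse matching step described here compares binary hash fingerprints inside a particle filter. A minimal sketch of that generic idea (Hamming similarity turned into particle weights) is given below; the exponential weighting and the kernel width are illustrative assumptions, not the paper's exact likelihood.

```python
import numpy as np

def hamming_similarity(fp_a, fp_b):
    """Similarity between two binary hash fingerprints (1.0 means identical)."""
    fp_a, fp_b = np.asarray(fp_a, np.uint8), np.asarray(fp_b, np.uint8)
    return 1.0 - np.count_nonzero(fp_a != fp_b) / fp_a.size

def weight_particles(template_fp, particle_fps, sigma=0.1):
    """Turn fingerprint similarities into normalised particle weights
    (the exponential weighting and sigma are illustrative assumptions)."""
    sims = np.array([hamming_similarity(template_fp, fp) for fp in particle_fps])
    w = np.exp((sims - 1.0) / sigma)      # higher similarity -> larger weight
    return w / w.sum()

# Hypothetical usage: a 64-bit template fingerprint against 100 particle fingerprints
rng = np.random.default_rng(0)
weights = weight_particles(rng.integers(0, 2, 64), rng.integers(0, 2, (100, 64)))
```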

Deng, S., Huang, L., Xu, G., Wu, X. & Wu, Z. 2017, 'On Deep Learning for Trust-Aware Recommendations in Social Networks', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1164-1177.
View/Download from: Publisher's site

With the emergence of online social networks, the social network-based recommendation approach is widely used. The major benefit of this approach is its ability to deal with the cold-start user problem. In addition to social networks, user trust information also plays an important role in obtaining reliable recommendations. Although matrix factorization (MF) has become dominant in recommender systems, the recommendation largely relies on the initialization of the user and item latent feature vectors. Aiming at addressing these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we attempt to leverage deep learning to determine the initialization in MF for trust-aware social recommendations and to differentiate the community effect in users' trusted friendships. A two-phase recommendation process is proposed to utilize deep learning in initialization and to synthesize the users' interests and their trusted friends' interests together with the impact of community effect for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.

Ding, C. & Tao, D. 2017, 'Pose-invariant face recognition with homography-based normalization', Pattern Recognition, vol. 66, pp. 144-152.
View/Download from: Publisher's site

Pose-invariant face recognition (PIFR) refers to the ability to recognize face images with arbitrary pose variations. Among existing PIFR algorithms, pose normalization has been proved to be an effective approach which preserves texture fidelity, but it usually depends on precise 3D face models or incurs high computational cost. In this paper, we propose a highly efficient PIFR algorithm that effectively handles the main challenges caused by pose variation. First, a dense grid of 3D facial landmarks is projected onto each 2D face image, which enables feature extraction in a pose-adaptive manner. Second, for the local patch around each landmark, an optimal warp is estimated based on homography to correct texture deformation caused by pose variations. The reconstructed frontal-view patches are then utilized for face recognition with traditional face descriptors. The homography-based normalization is highly efficient and the synthesized frontal face images are of high quality. Finally, we propose an effective approach for occlusion detection, which enables face recognition with visible patches only. Therefore, the proposed algorithm effectively handles the main challenges in PIFR. Experimental results on four popular face databases demonstrate that the proposed approach performs well in both constrained and unconstrained environments.
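For the homography-based patch normalization idea, a minimal OpenCV sketch is shown below: estimate a homography from landmark correspondences and warp a local patch to a canonical frontal layout. The point sets, output patch size and use of RANSAC are illustrative assumptions rather than the paper's exact estimation procedure.

```python
import cv2
import numpy as np

def frontalize_patch(patch, src_pts, dst_pts, out_size=(32, 32)):
    """Warp a local face patch to a canonical frontal layout with a homography.

    src_pts : four or more 2-D points around the patch in the input image
    dst_pts : the corresponding points in the frontal layout
    Illustrative sketch only; the paper estimates an optimal warp per landmark patch."""
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts), cv2.RANSAC)
    return cv2.warpPerspective(patch, H, out_size)
```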

Dong, Y., Du, B., Zhang, L., Zhang, L. & Tao, D. 2017, 'LAM3L: Locally adaptive maximum margin metric learning for visual data classification', Neurocomputing, vol. 235, pp. 1-9.
View/Download from: Publisher's site

Visual data classification, which is aimed at determining a unique label for each class, is an increasingly important issue in the machine learning community. In recent years, increasing attention has been paid to the application of metric learning for classification, which has been proven to be a good way to obtain promising performance. However, as a result of limited training samples and data with complex distributions, the vast majority of these algorithms usually fail to perform well. This has motivated us to develop a novel locally adaptive maximum margin metric learning (LAM3L) algorithm in order to maximally separate similar and dissimilar classes, based on the changes between the distances before and after the maximum margin metric learning. The experimental results on two widely used UCI datasets and a real hyperspectral dataset demonstrate that the proposed method outperforms the state-of-the-art metric learning methods.

Du, B., Wang, Z., Zhang, L., Zhang, L. & Tao, D. 2017, 'Robust and Discriminative Labeling for Multi-Label Active Learning Based on Maximum Correntropy Criterion', IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1694-1707.
View/Download from: Publisher's site

Multi-label learning draws great interest in many real-world applications. It is highly costly for the oracle to assign many labels to one instance. Meanwhile, it is also hard to build a good model without identifying discriminative labels. Can we reduce the labelling cost and improve the ability to train a good model for multi-label learning simultaneously? Active learning addresses the problem of scarce training samples by querying the most valuable samples to achieve better performance at little cost. In multi-label active learning, some research has been done on querying the relevant labels with fewer training samples, or on querying all labels without identifying the discriminative information; none of these methods can effectively handle outlier labels in the measurement of uncertainty. Since the maximum correntropy criterion (MCC) provides a robust analysis of outliers in many machine learning and data mining algorithms, in this paper we derive a robust multi-label active learning algorithm based on the MCC by merging uncertainty and representativeness, and propose an efficient alternating optimization method to solve it. With MCC, our method can eliminate the influence of outlier labels that are not discriminative for measuring uncertainty. To further improve the information measurement, we merge uncertainty and representativeness with the predicted labels of unknown data. This not only enhances the uncertainty measure but also improves the similarity measurement of multi-label data with label information. Experiments on benchmark multi-label data sets have shown superior performance over the state-of-the-art methods.
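A short numpy sketch of the maximum correntropy idea referenced here: a Gaussian kernel between two vectors saturates for large residuals, so outlier entries (such as outlier labels) contribute little to the score. The kernel width and toy vectors are made up for illustration.

```python
import numpy as np

def correntropy(a, b, sigma=1.0):
    """Empirical correntropy between two vectors under a Gaussian kernel.

    Unlike squared error, the kernel saturates for large residuals, so outlier
    entries (e.g. outlier labels) contribute little to the score."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean(np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))))

# Toy example: the outlier in the second pair barely moves the value
print(correntropy([1, 0, 1, 0], [1, 0, 1, 0]))   # 1.0
print(correntropy([1, 0, 1, 0], [1, 0, 1, 5]))   # ~0.75, outlier down-weighted
```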

Du, B., Wang, Z., Zhang, L., Zhang, L., Liu, W., Shen, J. & Tao, D. 2017, 'Exploring Representativeness and Informativeness for Active Learning', IEEE Transactions on Cybernetics, vol. 47, no. 1, pp. 14-26.
View/Download from: Publisher's site

How can we find a general way to choose the most suitable samples for training a classifier, even with very limited prior information? Active learning, which can be regarded as an iterative optimization procedure, plays a key role in constructing a refined training set to improve classification performance in a variety of applications, such as text analysis, image recognition and social network modeling. Although combining the representativeness and informativeness of samples has been proven promising for active sampling, state-of-the-art methods perform well only under certain data structures. Can we then find a way to fuse the two active sampling criteria without any assumption on the data? This paper proposes a general active learning framework that effectively fuses the two criteria. Inspired by a two-sample discrepancy problem, triple measures are elaborately designed to guarantee that the query samples not only possess the representativeness of the unlabeled data but also reveal the diversity of the labeled data. Any appropriate similarity measure can be employed to construct the triple measures. Meanwhile, an uncertainty measure is leveraged to generate the informativeness criterion, which can be carried out in different ways. Rooted in this framework, a practical active learning algorithm is proposed, which exploits a radial basis function together with the estimated probabilities to construct the triple measures, and a modified best-versus-second-best strategy to construct the uncertainty measure. Experimental results on benchmark datasets demonstrate that our algorithm consistently achieves superior performance over the state-of-the-art active learning algorithms.

Du, B., Xiong, W., Wu, J., Zhang, L., Zhang, L. & Tao, D. 2017, 'Stacked Convolutional Denoising Auto-Encoders for Feature Representation', IEEE Transactions on Cybernetics, vol. 47, no. 4, pp. 1017-1027.
View/Download from: Publisher's site

Deep networks have achieved excellent performance in learning representations from visual data. However, supervised deep models such as convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high-dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated through extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
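The sketch below shows one convolutional denoising auto-encoder layer in PyTorch (corrupt the input, encode, reconstruct), the kind of building block that such stacked architectures train layer by layer; the channel sizes, noise level and optimizer settings are arbitrary placeholders, and this is not the authors' exact network.

```python
import torch
from torch import nn

class ConvDenoisingAE(nn.Module):
    """One convolutional denoising auto-encoder layer: corrupt -> encode -> decode.
    Stacking and training such layers one at a time mirrors the idea in the abstract;
    channel sizes and noise level here are arbitrary placeholders."""

    def __init__(self, in_ch=1, hidden_ch=16, noise_std=0.2):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, hidden_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.ConvTranspose2d(hidden_ch, in_ch, 3, padding=1)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)   # corrupt the input
        return self.decoder(self.encoder(noisy))           # reconstruct the clean input

# One training step on a fake batch of 28x28 grayscale patches
model, loss_fn = ConvDenoisingAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 28, 28)
loss = loss_fn(model(x), x)
opt.zero_grad()
loss.backward()
opt.step()
```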

Du, B., Zhang, M., Zhang, L., Hu, R. & Tao, D. 2017, 'PLTD: Patch-Based Low-Rank Tensor Decomposition for Hyperspectral Images', IEEE Transactions on Multimedia, vol. 19, no. 1, pp. 67-79.
View/Download from: Publisher's site

Recent years have witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs always suffer from a huge data size and a mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they do not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed into a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.

Edwards, D., Cheng, M., Wong, A., Zhang, J. & Wu, Q. 2017, 'Ambassadors of Knowledge Sharing: Co-produced Travel Information Through Tourist-Local Social Media Exchange', International Journal of Contemporary Hospitality Management, vol. 29, no. 2, pp. 690-708.
View/Download from: UTS OPUS or Publisher's site

Purpose: The aim of this study is to understand the knowledge sharing structure and co-production of trip-related knowledge through online travel forums. Design/methodology/approach: The travel forum threads were collected from the TripAdvisor Sydney travel forum for the period from 2010 to 2014, which contains 115,847 threads from 8,346 conversations. The data analytical technique was based on a novel methodological approach - visual analytics including semantic pattern generation and network analysis. Findings: Findings indicate that the knowledge structure is created by community residents who camouflage as local experts and serve as ambassadors of a destination. The knowledge structure presents collective intelligence co-produced by community residents and tourists. Further findings reveal how these community residents associate with each other and form a knowledge repertoire with information covering various travel domain areas. Practical implications: The study offers valuable insights to help destination management organizations and tour operators identify existing and emerging tourism issues to achieve a competitive destination advantage. Originality/value: This study highlights the process of social media mediated travel knowledge co-production. It also discovers how community residents engage in reaching out to tourists by camouflaging as ordinary users.

Fan, X., Xu, R.Y.D., Cao, L. & Song, Y. 2017, 'Learning Nonparametric Relational Models by Conjugately Incorporating Node Information in a Network', IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 589-599.
View/Download from: Publisher's site

Relational model learning is useful for numerous practical applications. Many algorithms have been proposed in recent years to tackle this important yet challenging problem. Existing algorithms utilize only binary directional link data to recover hidden network structures. However, there exists far richer and more meaningful information in other parts of a network which one can (and should) exploit. The attributes associated with each node, for instance, contain crucial information to help practitioners understand the underlying relationships in a network. For this reason, in this paper, we propose two models and their solutions, namely the node-information involved mixed-membership model and the node-information involved latent-feature model, in an effort to systematically incorporate additional node information. To effectively achieve this aim, node information is used to generate individual sticks of a stick-breaking process. In this way, not only can we avoid the need to prespecify the number of communities beforehand, but the algorithm also encourages nodes exhibiting similar information to have a higher chance of being assigned the same community membership. Substantial efforts have been made toward achieving the appropriateness and efficiency of these models, including the use of conjugate priors. We evaluate our framework and its inference algorithms using real-world data sets, which show the generality and effectiveness of our models in capturing implicit network structures.
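To make the stick-breaking construction mentioned here concrete, a small numpy sketch follows; the way node scores skew each Beta draw is a hypothetical modulation for illustration only, not the paper's generative model.

```python
import numpy as np

def stick_breaking_weights(num_sticks, concentration=1.0, node_scores=None, rng=None):
    """Draw mixture weights by the stick-breaking construction.

    If `node_scores` is given (one positive score per stick), each Beta draw is
    skewed by that score - a hypothetical way of letting per-node information
    modulate individual sticks, not the paper's exact generative model."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.ones(num_sticks) if node_scores is None else np.asarray(node_scores, float)
    betas = rng.beta(scores, concentration)                    # break proportion per stick
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining                                   # leftover mass is the tail

print(stick_breaking_weights(5, concentration=2.0, node_scores=[0.5, 1, 2, 1, 0.5]))
```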

Fang, M., Yin, J., Hall, L.O. & Tao, D. 2017, 'Active Multitask Learning With Trace Norm Regularization Based on Excess Risk', IEEE Transactions on Cybernetics.
View/Download from: Publisher's site

This paper addresses the problem of active learning on multiple tasks, where labeled data are expensive to obtain for each individual task but the learning problems share some commonalities across multiple related tasks. To leverage the benefits of jointly learning from multiple related tasks and making active queries, we propose a novel active multitask learning approach based on trace norm regularized least squares. The basic idea is to induce an optimal classifier which has the lowest risk and is at the same time closest to the true hypothesis. Toward this aim, we devise a new active selection criterion that takes into account not only the risk but also the excess risk, which measures the distance to the true hypothesis. Based on this criterion, our proposed algorithm actively selects the instance to query for its label based on the combination of the two risks. Experiments on both synthetic and real-world datasets show that our proposed algorithm provides superior performance as compared to other state-of-the-art active learning methods.

Feng, X., Wan, W., Xu, R.Y.D., Chen, H., Li, P. & Sánchez, J.A. 2017, 'A perceptual quality metric for 3D triangle meshes based on spatial pooling', Frontiers of Computer Science.
View/Download from: UTS OPUS

Ferguson, C., Inglis, S.C., Newton, P.J., Middleton, S., Macdonald, P.S. & Davidson, P.M. 2017, 'Barriers and enablers to adherence to anticoagulation in heart failure with atrial fibrillation: patient and provider perspectives'.
View/Download from: UTS OPUS or Publisher's site

Aims & Objectives: The purpose of this study was to elucidate the barriers and enablers to adherence to anticoagulation in individuals with chronic heart failure (CHF) and concomitant atrial fibrillation (AF), from the perspective of patients and providers. Background: CHF and AF commonly coexist and are associated with increased stroke risk and mortality. Oral anticoagulation significantly reduces stroke risk and improves outcomes, yet in approximately 30% of cases anticoagulation is not commenced, for a variety of reasons. Design: Qualitative study using narrative inquiry. Methods: Data from face-to-face individual interviews with patients were combined with information retrieved from healthcare file note review, which documented the clinician perspective. This study is a synthesis of the two data sources, obtained during patient clinical assessments as part of the Atrial Fibrillation And Stroke Thromboprophylaxis in hEart failuRe (AFASTER) Study. Results: Patient choice and preference were important factors in anticoagulation decisions, including treatment burden, unfavourable or intolerable side effects and patient refusal. Financial barriers included cost of travel, medication cost and reimbursement. Psychological factors included psychiatric illness, cognitive impairment and depression. Social barriers included homelessness and the absence of a caregiver or lack of caregiver assistance. Clinician reticence included fear of falls, frailty, age, fear of bleeding and the challenges of multi-morbidity. Facilitators to successful prescription and adherence were caregiver support, reminders and routine, self-testing and the use of technology. Conclusions: Many barriers remain to high-risk individuals being prescribed anticoagulation for stroke prevention. There are a number of enabling factors that facilitate prescription and optimize treatment adherence. Nurses should challenge these treatment barriers and seek enabling factors to optimise therapy. Relevance to clinical practice: Nurs...

Ferguson, C., Inglis, S.C., Newton, P.J., Middleton, S., Macdonald, P.S. & Davidson, P.M. 2017, 'Multi-morbidity, frailty and self-care: important considerations in treatment with anticoagulation drugs. Outcomes of the AFASTER study.', European journal of cardiovascular nursing : journal of the Working Group on Cardiovascular Nursing of the European Society of Cardiology, vol. 16, no. 2, pp. 113-124.
View/Download from: UTS OPUS or Publisher's site

Chronic heart failure (CHF) and atrial fibrillation (AF) are complex cardiogeriatric syndromes mediated by physical, psychological and social factors. Thromboprophylaxis is an important part of avoiding adverse events in these syndromes, particularly stroke. This study sought to describe the clinical characteristics of a cohort of patients admitted to hospital with CHF and concomitant AF and to document the rate and type of thromboprophylaxis. We examined the practice patterns of the prescription of treatment and determined the predictors of adverse events. Prospective consecutive participants with CHF and concomitant AF were enrolled during the period April to October 2013. Outcomes were assessed at 12 months, including all-cause readmission to hospital and mortality, stroke or transient ischaemic attack, and bleeding. All-cause readmission to hospital was frequent (68%) and the 12-month all-cause mortality was high (29%). The prescription of anticoagulant drugs at discharge was statistically significantly associated with a lower mortality at 12 months (23 vs. 40%; p=0.037; hazards ratio 0.506; 95% confidence interval 0.267-0.956), but was not associated with lower rates of readmission to hospital among patients with CHF and AF. Sixty-six per cent of participants were prescribed anticoagulant drugs on discharge from hospital. Self-reported self-care behaviour and 'not for cardiopulmonary resuscitation' were associated with not receiving anticoagulant drugs at discharge. Although statistical significance was not achieved, those patients who were assessed as frail or having greater comorbidity were less likely to receive anticoagulant drugs at discharge. This study highlights multi-morbidity, frailty and self-care to be important considerations in thromboprophylaxis. Shared decision-making with patients and caregivers offers the potential to improve treatment knowledge, adherence and outcomes in this group of patients with complex care needs.

Forber, J., Carter, B., DiGiacomo, M., Davidson, P.M. & Jackson, D. 2017, 'Future proofing undergraduate nurse clinical education: continuing the dialogue', Journal of Nursing Management.
View/Download from: UTS OPUS

Gholizadeh, L., Ali Khan, S., Vahedi, F. & Davidson, P.M. 2017, 'Sensitivity and specificity of Urdu version of the PHQ-9 to screen depression in patients with coronary artery disease.', Contemp Nurse, vol. 53, no. 1, pp. 75-81.
View/Download from: Publisher's site

BACKGROUND: The Patient Health Questionnaire (PHQ-9) possesses many characteristics of a good screening tool and has the capacity to be used for screening depression in patients with coronary artery disease (CAD). AIM: To examine the psychometric properties and criterion validity of the PHQ-9 to screen and detect depression in patients with CAD in Pakistan. DESIGN: In this validation study, 150 patients with CAD completed the Urdu version of the PHQ-9. The major depressive episode module of the Mini International Neuropsychiatric Interview (MINI) was used as the gold standard. RESULTS: The Urdu version of the PHQ-9 revealed a good internal consistency with Cronbach's alpha of 0.83. Optimal sensitivity (76%) and specificity (76%) were achieved using the cut-off score of PHQ-9 ≥6, with area under the ROC curve of 0.86. CONCLUSION: The Urdu version of the PHQ-9 has acceptable psychometric properties to screen and detect major depression in patients with CAD.
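As a worked illustration of how a cutoff such as PHQ-9 ≥ 6 is evaluated against a gold standard, the sketch below computes sensitivity and specificity from made-up screening data.

```python
import numpy as np

def screen_performance(scores, has_depression, cutoff=6):
    """Sensitivity and specificity of a questionnaire cutoff (e.g. PHQ-9 >= 6)
    against a gold-standard diagnosis. The data below are invented."""
    scores = np.asarray(scores)
    truth = np.asarray(has_depression, dtype=bool)
    flagged = scores >= cutoff
    sensitivity = np.mean(flagged[truth])        # true positives / all true cases
    specificity = np.mean(~flagged[~truth])      # true negatives / all non-cases
    return sensitivity, specificity

print(screen_performance([2, 7, 9, 4, 12, 5], [0, 1, 1, 0, 1, 1], cutoff=6))
```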

Ghosh, S., Li, J., Cao, L. & Ramamohanarao, K. 2017, 'Septic shock prediction for ICU patients via coupled HMM walking on sequential contrast patterns.', J Biomed Inform, vol. 66, pp. 19-31.
View/Download from: UTS OPUS or Publisher's site

BACKGROUND AND OBJECTIVE: Critical care patient events like sepsis or septic shock in intensive care units (ICUs) are dangerous complications which can cause multiple organ failures and eventual death. Preventive prediction of such events will allow clinicians to stage effective interventions for averting these critical complications. METHODS: It is widely understood that physiological conditions of patients on variables such as blood pressure and heart rate are suggestive of gradual changes over a certain period of time, prior to the occurrence of a septic shock. This work investigates the performance of a novel machine learning approach for the early prediction of septic shock. The approach combines highly informative sequential patterns extracted from multiple physiological variables and captures the interactions among these patterns via coupled hidden Markov models (CHMM). In particular, the patterns are extracted from three non-invasive waveform measurements: the mean arterial pressure levels, the heart rates and respiratory rates of septic shock patients from a large clinical ICU dataset called MIMIC-II. EVALUATION AND RESULTS: For baseline estimation, SVM and HMM models are employed on the continuous time series data for the given patients, using MAP (mean arterial pressure), HR (heart rate), and RR (respiratory rate). Single-channel patterns based HMM (SCP-HMM) and multi-channel patterns based coupled HMM (MCP-HMM) are compared against the baseline models using 5-fold cross-validation accuracies over multiple rounds. In particular, the results of MCP-HMM are statistically significant (p = 0.0014) in comparison with the baseline models. Our experiments demonstrate strong competitive accuracy in the prediction of septic shock, especially when the interactions between the multiple variables are coupled by the learning model. CONCLUSIONS: It can be concluded that the novelty of the approach stems from the integration of sequence-based physiological pa...

Gong, C., Liu, T., Tang, Y., Yang, J., Yang, J. & Tao, D. 2017, 'A Regularization Approach for Instance-Based Superset Label Learning.', IEEE Trans Cybern.
View/Download from: Publisher's site

Different from traditional supervised learning, in which each training example has only one explicit label, superset label learning (SLL) refers to the problem in which a training example can be associated with a set of candidate labels, and only one of them is correct. Existing SLL methods are either regularization-based or instance-based, and the latter has achieved state-of-the-art performance. This is because the latest instance-based methods contain an explicit disambiguation operation that accurately picks up the ground-truth label of each training example from its ambiguous candidate labels. However, such a disambiguation operation does not fully consider the mutually exclusive relationship among different candidate labels, so the disambiguated labels are usually generated in a nondiscriminative way, which is unfavorable for the instance-based methods to obtain satisfactory performance. To address this defect, we develop a novel regularization approach for instance-based superset label (RegISL) learning so that our instance-based method also inherits the good discriminative ability possessed by the regularization scheme. Specifically, we employ a graph to represent the training set, and require the examples that are adjacent on the graph to obtain similar labels. More importantly, a discrimination term is proposed to enlarge the gap of values between possible labels and unlikely labels for every training example. As a result, the intrinsic constraints among different candidate labels are deployed, and the disambiguated labels generated by RegISL are more discriminative and accurate than those output by existing instance-based algorithms. The experimental results on various tasks convincingly demonstrate the superiority of our RegISL to other typical SLL methods in terms of both training accuracy and test accuracy.

Gong, C., Tao, D., Liu, W., Liu, L. & Yang, J. 2017, 'Label Propagation via Teaching-to-Learn and Learning-to-Teach', IEEE Transactions on Neural Networks and Learning Systems.

How to propagate label information from labeled examples to unlabeled examples over a graph has been intensively studied for a long time. Existing graph-based propagation algorithms usually treat unlabeled examples equally, and transmit seed labels to the unlabeled examples that are connected to the labeled examples in a neighborhood graph. However, such a popular propagation scheme is very likely to yield inaccurate propagation, because it falls short of tackling ambiguous but critical data points (e.g., outliers). To this end, this paper treats the unlabeled examples in different levels of difficulties by assessing their reliability and discriminability, and explicitly optimizes the propagation quality by manipulating the propagation sequence to move from simple to difficult examples. In particular, we propose a novel iterative label propagation algorithm in which each propagation alternates between two paradigms, teaching-to-learn and learning-to-teach (TLLT). In the teaching-to-learn step, the learner conducts the propagation on the simplest unlabeled examples designated by the teacher. In the learning-to-teach step, the teacher incorporates the learner's feedback to adjust the choice of the subsequent simplest examples. The proposed TLLT strategy critically improves the accuracy of label propagation, making our algorithm substantially robust to the values of tuning parameters, such as the Gaussian kernel width used in graph construction. The merits of our algorithm are theoretically justified and empirically demonstrated through experiments performed on both synthetic and real-world data sets.
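For context, the sketch below implements the classical graph label propagation scheme that TLLT builds on (normalised affinity, iterative spreading, clamped seeds); the teaching-to-learn/learning-to-teach ordering of simple-to-difficult examples is not reproduced here, and the parameters are placeholders.

```python
import numpy as np

def propagate_labels(W, y_init, labeled_mask, alpha=0.99, iters=100):
    """Classical graph label propagation on an affinity matrix W (n x n) with
    one-hot seed labels y_init (n x c); labeled examples are clamped each step.
    The TLLT simple-to-difficult ordering is not reproduced here."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                  # symmetrically normalised affinity
    F = y_init.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * y_init     # spread labels over the graph
        F[labeled_mask] = y_init[labeled_mask]       # keep the seeds fixed
    return F.argmax(axis=1)
```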

Green, A., Luckett, T., DiGiacomo, M., Abbott, P., Delaney, P., Delaney, J. & Davidson, P.M. 2017, 'An asset-informed approach to service development', Nurse Researcher.
View/Download from: UTS OPUS

Gu, K., Tao, D., Qiao, J.F. & Lin, W. 2017, 'Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data', IEEE Transactions on Neural Networks and Learning Systems.
View/Download from: Publisher's site

In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is to develop a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much bigger than the relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is that a robust image enhancement framework is established based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can well enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.

Guo, D., Xu, J., Zhang, J., Xu, M., Cui, Y. & He, X. 2017, 'User relationship strength modeling for friend recommendation on Instagram', Neurocomputing, vol. 239, pp. 9-18.
View/Download from: UTS OPUS or Publisher's site

Social strength modeling in the social media community has attracted increasing research interest. Different from Flickr, which has been explored by many researchers, Instagram is more popular for mobile users and is conducive to likes and comments, but it has seldom been investigated. On Instagram, a user can post photos/videos, follow other users, and comment on and like other users' posts. These actions generate diverse forms of data that result in multiple user relationship views. In this paper, we propose a new framework to discover the underlying social relationship strength. User relationship learning under multiple views and the relationship strength modeling are coupled into one process framework. In addition, given the learned relationship strength, a coarse-to-fine method is proposed for friend recommendation. Experiments on friend recommendation for Instagram are presented to show the effectiveness and efficiency of the proposed framework. As exhibited by our experimental results, it obtains better performance than other related methods. Although our method has been proposed for Instagram, it can be easily extended to any other social media community.

He, X. & Shi, L. 2017, 'Index Portfolio and Welfare Analysis Under Heterogeneous Beliefs', Journal of Banking and Finance, vol. 75, pp. 64-79.
View/Download from: Publisher's site

He, X. & Treich, N. 2017, 'Prediction market prices under risk aversion and heterogeneous beliefs', Journal of Mathematical Economics, vol. 70, pp. 105-114.
View/Download from: Publisher's site

He, X.Z., Lütkebohmert, E. & Xiao, Y. 2017, 'Rollover Risk and Credit Risk under Time-varying Margin', Quantitative Finance, vol. 17, no. 3, pp. 455-469.
View/Download from: Publisher's site

For a firm financed by a mixture of collateralized (short-term) debt and uncollateralized (long-term) debt, we show that fluctuations in margin requirements, reflecting funding liquidity shocks, increase the firm's default risk and credit spreads. The severity with which a firm is hit by increasing margin requirements depends strongly on both its financing structure and its debt maturity structure. Our results imply that an additional premium should be added when evaluating debt in order to account for rollover risks, especially for short-matured bonds. In terms of policy implications, our results strongly indicate that regulators should intervene quickly to curtail margins in crisis periods and maintain a reasonably low margin level in order to effectively prevent a creditors' run on debt.

Ho-Le, T.P., Center, J.R., Eisman, J.A., Nguyen, H.T. & Nguyen, T.V. 2017, 'Prediction of Bone Mineral Density and Fragility Fracture by Genetic Profiling.', Journal of Bone and Mineral Research, vol. 32, no. 2, pp. 285-293.
View/Download from: UTS OPUS or Publisher's site

Although the susceptibility to fracture is partly determined by genetic factors, the contribution of newly discovered genetic variants to fracture prediction is still unclear. This study sought to define the predictive value of genetic profiling for fracture prediction. Sixty-two bone mineral density (BMD)-associated single-nucleotide polymorphisms (SNPs) were genotyped in 557 men and 902 women who had participated in the Dubbo Osteoporosis Epidemiology Study. The incidence of fragility fracture was ascertained from X-ray reports between 1990 and 2015. Femoral neck BMD was measured by dual-energy X-ray absorptiometry. A weighted polygenic risk score (GRS) was created as a function of the number of risk alleles and their BMD-associated regression coefficients for each SNP. The association between GRS and fracture risk was assessed by Cox's proportional hazards model. Individuals with greater GRS had lower femoral neck BMD (P < 0.01), but the variation in GRS accounted for less than 2% of the total variance in BMD. Each unit increase in GRS was associated with a hazard ratio of 1.20 (95% CI, 1.04-1.38) for fracture, and this association was independent of age, prior fracture and falls, and, in a subset of 33 SNPs, independent of femoral neck BMD. The significant association between GRS and fracture was observed for vertebral and wrist fractures, but not for hip fracture. The area under the receiver operating characteristic (ROC) curve for the model with GRS and clinical risk factors was 0.71 (95% CI, 0.68-0.74). With GRS, the correct reclassification of fracture vs non-fracture ranged from 12% for hip fracture to 23% for wrist fracture. Genetic profiling of BMD-associated genetic variants could improve the accuracy of fracture prediction over and above that of clinical risk factors alone, and help stratify individuals by fracture status.
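The weighted genetic risk score described here is simply a coefficient-weighted count of risk alleles; a minimal sketch follows, with invented allele counts and coefficients for illustration.

```python
import numpy as np

def genetic_risk_score(risk_allele_counts, betas):
    """Weighted polygenic risk score: sum over SNPs of
    (risk allele count, 0/1/2) x (BMD-associated regression coefficient)."""
    return float(np.dot(risk_allele_counts, betas))

# Hypothetical person genotyped at 5 of the 62 SNPs, with made-up coefficients
print(genetic_risk_score([0, 1, 2, 1, 0], [0.02, 0.05, 0.03, 0.01, 0.04]))   # 0.12
```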

Huang, T., Huang, M.L., Nguyen, Q., Zhao, L., Huang, W. & Chen, J. 2017, 'A Space-Filling Multidimensional Visualization (SFMDVis) for Exploratory Data Analysis', Information Sciences, vol. 390, pp. 32-53.
View/Download from: UTS OPUS or Publisher's site

The space-filling visualization model was first invented by Ben Shneiderman [28] for maximizing the utilization of display space in relational data (or graph) visualization, especially for tree visualization. It uses the concept of enclosure, which dismisses the "edges" in the graphic representation that are all too frequently used in traditional node-link based graph visualizations. Therefore, the major issue in graph visualization, edge crossing, can be naturally solved through the adoption of a space-filling approach. In the past, however, the space-filling concept has not attracted much attention from researchers in the field of multidimensional visualization. The problem of 'edge crossing' also occurs among the polylines used as the basic visual elements in parallel coordinates visualization, and it is problematic if those crossings are not evenly distributed on the display plane, as visual clutter will occur. This problem can significantly reduce human readability when reviewing a particular region of the visualization. In this study, we propose a new Space-Filling Multidimensional Data Visualization (SFMDVis) that, for the first time, introduces a space-filling approach into multidimensional data visualization. The main contributions are: (1) achieving the maximization of space utilization in multidimensional visualization (i.e. 100% of the display area is fully used), (2) eliminating visual clutter in SFMDVis through the use of a non-classic geometric primitive and (3) improving the quality of visualization for the visual perception of linear correlations among different variables as well as for recognizing data patterns. To evaluate the quality of SFMDVis, we conducted a usability study to measure the performance of SFMDVis in comparison with parallel coordinates and a scatterplot matrix for finding linear correlations and data patterns. The evaluation results have suggested that the acc...

Hunt, L., Frost, S.A., Newton, P.J., Salamonson, Y. & Davidson, P.M. 2017, 'A survey of critical care nurses' knowledge of intra-abdominal hypertension and abdominal compartment syndrome', Australian Critical Care, vol. 30, no. 1, pp. 21-27.
View/Download from: Publisher's site

Background: Intra-abdominal hypertension and abdominal compartment syndrome are potentially life-threatening conditions. Critical care nurses need to understand the factors that predispose patients to intra-abdominal hypertension (IAH) and abdominal compartment syndrome (ACS). Predicting and managing IAH and ACS are important to improve health outcomes. Aim: The aim of this paper was to (1) assess the knowledge of Australian critical care nurses about current IAH and ACS practice guidelines, measurement techniques, and predictors for the development of IAH and ACS, and (2) identify barriers in recognizing IAH and ACS and measuring IAP. Methods: Between October 2014 and April 2015, 86 registered nurses employed in the area of critical care were recruited to participate in an online, 19-item questionnaire. The survey was distributed to critical care nurses via the Australian College of Critical Care Nurses (ACCCN) mailing list and directly to intensive care units. The majority of participants were women (n = 62); all participants were registered nurses employed in critical care; the response rate was 3.2%. The study design was used to establish demographic data, employment data, and individuals' knowledge related to IAH and ACS. Participants had the option to write handwritten responses in addition to selecting a closed question response. Results: The results showed that most survey participants were able to identify some obvious causes of IAH. However, less than 20% were able to recognize less apparent indices of risk. A lack of education related to IAP monitoring was identified by nearly half (44.2%) of respondents as the primary barrier to monitoring IAP.

Jan, M., Nanda, P., Usman, M. & He, X. 2017, 'PAWN: A Payload-based mutual Authentication scheme for Wireless Sensor Networks', Concurrency and Computation: Practice and Experience.
View/Download from: UTS OPUS

Jan, M.A., Nanda, P., He, X.S. & Liu, R.P. 2017, 'A Sybil Attack Detection Scheme for a Forest Wildfire Monitoring Application', Future Generation Computer Systems: the international journal of grid computing: theory, methods and applications.
View/Download from: UTS OPUS or Publisher's site

Wireless Sensor Networks (WSNs) have experienced phenomenal growth over the past decade. They are typically deployed in human-inaccessible terrains to monitor and collect time-critical and delay-sensitive events. There have been several studies on the use of WSN in different applications. All such studies have mainly focused on Quality of Service (QoS) parameters such as delay, loss, jitter, etc. of the sensed data. Security provisioning is also an important and challenging task lacking in all previous studies. In this paper, we propose a Sybil attack detection scheme for a cluster-based hierarchical network mainly deployed to monitor forest wildfire. We propose a two-tier detection scheme. Initially, Sybil nodes and their forged identities are detected by high-energy nodes. However, if one or more identities of a Sybil node sneak through the detection process, they are ultimately detected by the two base stations. After Sybil attack detection, an optimal percentage of cluster heads are elected and each one is informed using nomination packets. Each nomination packet contains the identity of an elected cluster head and an end user’s specific query for data collection within a cluster. These queries are user-centric, on-demand and adaptive to an end user requirement. The undetected identities of Sybil nodes reside in one or more clusters. Their goal is to transmit high false-negative alerts to an end user for diverting attention to those geographical regions which are less vulnerable to a wildfire. Our proposed approach has better network lifetime due to efficient sleep–awake scheduling, higher detection rate and low false-negative rate.

Li, X., Lu, Q., Dong, Y. & Tao, D. 2017, 'SCE: A Manifold Regularized Set-Covering Method for Data Partitioning', IEEE Transactions on Neural Networks and Learning Systems.
View/Download from: Publisher's site

Cluster analysis plays a very important role in data analysis. In recent years, the cluster ensemble, as a cluster analysis tool, has drawn much attention for its robustness, stability, and accuracy. Much effort has been devoted to combining different initial clustering results into a single clustering solution with better performance. However, these methods neglect the structure information of the raw data when performing the cluster ensemble. In this paper, we propose a Structural Cluster Ensemble (SCE) algorithm for data partitioning, formulated as a set-covering problem. In particular, we construct a Laplacian regularized objective function to capture the structure information among clusters. Moreover, considering the importance of the discriminative information underlying the initial clustering results, we add a discriminative constraint into our proposed objective function. Finally, we verify the performance of the SCE algorithm on both synthetic and real data sets. The experimental results show the effectiveness of the proposed SCE algorithm.

Ling, S.H., San, P.P., Lam, H.K. & Nguyen, H.T. 2017, 'Hypoglycemia detection: multiple regression-based combinational neural logic approach', Soft Computing, vol. 21, no. 2, pp. 543-553.
View/Download from: UTS OPUS or Publisher's site

Hypoglycemia is a common and serious side effect of type 1 diabetes. We measure physiological parameters continuously to provide detection of hypoglycemic episodes in type 1 diabetes mellitus patients using a multiple regression-based combinational neural logic approach. In this work, a neural logic network with multiple regression is applied to the development of a non-invasive hypoglycemia monitoring system. It is an alarm system which measures physiological parameters of the electrocardiogram signal (heart rate and corrected QT interval) and determines the onset of hypoglycemia by the use of the proposed hybrid neural logic approach. In this clinical application, a combinational neural logic network with multiple regression is systematically designed for hypoglycemia detection based on the characteristics of this application. To optimize the parameters of the hybrid combinational neural logic system, hybrid particle swarm optimization with wavelet mutation is applied to tune the parameters of the system. To illustrate the effectiveness of the proposed method, the hypoglycemia monitoring system is analyzed using real data sets collected from 15 children ((Formula presented.) years) with type 1 diabetes at the Department of Health, Government of Western Australia. With the use of the proposed method, the best testing sensitivity of 79.07% and specificity of 53.64% were obtained.

Liu, Q., Deng, J., Yang, J., Liu, G. & Tao, D. 2017, 'Adaptive cascade regression model for robust face alignment', IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 797-807.
View/Download from: Publisher's site
View description>>

Cascade regression is a popular face alignment approach, and it has achieved good performance on in-the-wild databases. However, it depends heavily on local features in estimating reliable landmark locations and therefore suffers from corrupted images, such as images with occlusion, which often occur in real-world face images. In this paper, we present a new adaptive cascade regression model for robust face alignment. In each iteration, the shape-indexed appearance is introduced to estimate the occlusion level of each landmark, and each landmark is then weighted according to its estimated occlusion level. The occlusion levels of the landmarks also act as adaptive weights on the shape-indexed features to decrease the noise on the shape-indexed features. At the same time, an exemplar-based shape prior is designed to suppress the influence of local image corruption. Extensive experiments are conducted on challenging benchmarks, and the experimental results demonstrate that the proposed method achieves better results than state-of-the-art methods for facial landmark localization and occlusion detection.

Liu, Q., Sun, Y., Wang, C., Liu, T. & Tao, D. 2017, 'Elastic net hypergraph learning for image clustering and semi-supervised classification', IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 452-463.
View/Download from: Publisher's site
View description>>

Graph models are emerging as a very effective tool for learning the complex structures and relationships hidden in data. In general, the critical purpose of graph-oriented learning algorithms is to construct an informative graph for image clustering and classification tasks. In addition to the classical K-nearest-neighbor and r-neighborhood methods for graph construction, the l1-graph and its variants are emerging methods for finding the neighboring samples of a center datum, where the corresponding ingoing edge weights are simultaneously derived from the sparse reconstruction coefficients of the remaining samples. However, the pairwise links of the l1-graph are not capable of capturing the high-order relationships between the center datum and its prominent data in sparse reconstruction. Meanwhile, from the perspective of variable selection, the l1-norm sparse constraint, regarded as a LASSO model, tends to select only one datum from a group of data that are highly correlated and ignore the others. To cope with these drawbacks simultaneously, we propose a new elastic net hypergraph learning model, which consists of two steps. In the first step, a robust matrix elastic net model is constructed to find the canonically related samples in a somewhat greedy way, achieving the grouping effect by adding the l2 penalty to the l1 constraint. In the second step, a hypergraph is used to represent the high-order relationships between each datum and its prominent samples by regarding them as a hyperedge. Subsequently, the hypergraph Laplacian matrix is constructed for further analysis. New hypergraph learning algorithms, including unsupervised clustering and multi-class semi-supervised classification, are then derived. Extensive experiments on face and handwriting databases demonstrate the effectiveness of the proposed method.
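The first step described above, reconstructing each sample from the remaining samples with an elastic net and treating the non-zero coefficients as a hyperedge, can be sketched as follows. This is only a simplified illustration on random data; the robust matrix elastic net, the hypergraph Laplacian and the downstream learning algorithms are omitted, and all parameter values are arbitrary.

```python
# Sketch of elastic-net neighbour selection for hyperedge construction.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))      # 100 samples, 20 features

hyperedges = []
for i in range(X.shape[0]):
    others = np.delete(X, i, axis=0)    # reconstruct sample i from the rest
    model = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000)
    model.fit(others.T, X[i])           # columns of others.T are the other samples
    support = np.flatnonzero(np.abs(model.coef_) > 1e-6)
    idx = np.delete(np.arange(X.shape[0]), i)   # map back to original indices
    hyperedges.append({i, *idx[support]})       # datum i plus its prominent samples

print(hyperedges[0])
```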

Liu, T., Gong, M. & Tao, D. 2017, 'Large-Cone Nonnegative Matrix Factorization', IEEE Transactions on Neural Networks and Learning Systems.
View/Download from: Publisher's site
View description>>

Nonnegative matrix factorization (NMF) has been greatly popularized by its parts-based interpretation and the effective multiplicative updating rule for searching local solutions. In this paper, we study the problem of how to obtain an attractive local solution for NMF, one that not only fits the given training data well but also generalizes well on unseen test data. Based on the geometric interpretation of NMF, we introduce two large-cone penalties for NMF and propose large-cone NMF (LCNMF) algorithms. Compared with NMF, LCNMF obtains bases comprising a larger simplicial cone, and therefore has three advantages: (1) the empirical reconstruction error of LCNMF is usually smaller; (2) the generalization ability of the proposed algorithm is much more powerful; and (3) the obtained bases of LCNMF have a low-overlapping property, which enables the bases to be sparse and makes the proposed algorithms very robust. Experiments on synthetic and real-world data sets confirm the efficiency of LCNMF.
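For context on the factorization being penalized, the snippet below implements plain NMF with the standard multiplicative updates mentioned in the abstract; the large-cone penalties that distinguish LCNMF are not implemented here.

```python
# Plain NMF via multiplicative updates (background only; no large-cone penalty).
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((50, 30)))
W, H = nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative reconstruction error
```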

Liu, T., Tao, D., Song, M. & Maybank, S. 2017, 'Algorithm-Dependent Generalization Bounds for Multi-Task Learning.', IEEE Trans Pattern Anal Mach Intell, vol. 39, no. 2, pp. 227-241.
View/Download from: Publisher's site
View description>>

Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.

Liu, W., Chen, X., Yang, J. & Wu, Q. 2017, 'Robust Color Guided Depth Map Restoration', IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 315-327.
View/Download from: Publisher's site
View description>>

One of the most challenging issues in color guided depth map restoration is the inconsistency between color edges in guidance color images and depth discontinuities on depth maps. This makes the restored depth map suffer from texture copy artifacts and blurring depth discontinuities. To handle this problem, most state-of-the-art methods design complex guidance weight based on guidance color images and heuristically make use of the bicubic interpolation of the input depth map. In this paper, we show that using bicubic interpolated depth map can blur depth discontinuities when the upsampling factor is large and the input depth map contains large holes and heavy noise. In contrast, we propose a robust optimization framework for color guided depth map restoration. By adopting a robust penalty function to model the smoothness term of our model, we show that the proposed method is robust against the inconsistency between color edges and depth discontinuities even when we use simple guidance weight. To the best of our knowledge, we are the first to solve this problem with a principled mathematical formulation rather than previous heuristic weighting schemes. The proposed robust method performs well in suppressing texture copy artifacts. Moreover, it can better preserve sharp depth discontinuities than previous heuristic weighting schemes. Through comprehensive experiments on both simulated data and real data, we show promising performance of the proposed method.
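The general shape of a color-guided smoothness term with a robust penalty can be illustrated with a toy 1-D example: a data term keeps the result close to the noisy depth, while a Charbonnier-type penalty, down-weighted across strong color edges, smooths flat regions without blurring the depth step. This is only an illustrative stand-in, not the optimization framework of the paper; all weights and step sizes are arbitrary.

```python
# Toy 1-D sketch of color-guided depth smoothing with a robust penalty.
import numpy as np

def charbonnier_grad(d, eps=1e-3):
    return d / np.sqrt(d * d + eps * eps)   # derivative of sqrt(d^2 + eps^2)

def restore_depth(depth_obs, color, lam=2.0, steps=500, lr=0.1):
    d = depth_obs.copy()
    # small guidance weight across strong color edges, large in flat regions
    g = np.exp(-np.abs(np.diff(color)) / 0.1)
    for _ in range(steps):
        data_grad = d - depth_obs
        diff = np.diff(d)
        smooth = charbonnier_grad(diff) * g
        smooth_grad = np.zeros_like(d)
        smooth_grad[:-1] -= smooth          # d/dd_i of rho(d_{i+1} - d_i)
        smooth_grad[1:] += smooth           # d/dd_{i+1} of rho(d_{i+1} - d_i)
        d -= lr * (data_grad + lam * smooth_grad)
    return d

depth_obs = np.concatenate([np.ones(50), 2 * np.ones(50)])
depth_obs += 0.2 * np.random.default_rng(0).standard_normal(100)
color = np.concatenate([np.zeros(50), np.ones(50)])  # edge aligned with depth step
print(restore_depth(depth_obs, color)[:5])
```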

Liu, W., Chen, X., Yang, J. & Wu, Q. 2017, 'Variable Bandwidth Weighting for Texture Copy Artifacts Suppression in Guided Depth Upsampling', IEEE Transactions on Circuits and Systems for Video Technology.
View/Download from: Publisher's site

Lu, J., Xuan, J., Zhang, G., Xu, Y.D. & Luo, X. 2017, 'Bayesian Nonparametric Relational Topic Model through Dependent Gamma Processes', IEEE Transactions on Knowledge and Data Engineering, pp. 1-14.
View/Download from: UTS OPUS or Publisher's site

Lv, L., Fan, S., Huang, M., Huang, W. & Yang, G. 2017, 'Golden Rectangle Treemap', Journal of Physics: Conference Series, vol. 787, no. 1, pp. 1-6.
View/Download from: UTS OPUS or Publisher's site
View description>>

Treemaps, a visualization method for representing hierarchical data sets, are becoming more and more popular for their efficient and compact displays. Several algorithms have been proposed to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap. In this paper, we introduce a new treemap algorithm that generates layouts in which the rectangles are easier to select and hierarchy information is easier to obtain. The algorithm generates rectangles that approximate golden rectangles. To demonstrate its effectiveness, several analyses of the golden rectangle treemap are carried out on a disk file system at the end of this paper.
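A simple way to see how aspect ratios can be steered toward the golden ratio is a recursive splitter that divides each region along its longer side with a roughly phi : 1 weight split. This sketch is not the algorithm proposed in the paper, only an illustration of the general idea.

```python
# Simple recursive treemap layout biased toward golden-ratio rectangles.
PHI = (1 + 5 ** 0.5) / 2

def layout(values, x, y, w, h):
    """Return a list of (value, x, y, w, h) rectangles tiling (x, y, w, h)."""
    if len(values) == 1:
        return [(values[0], x, y, w, h)]
    total = sum(values)
    # split the value list so the two parts have roughly a phi : 1 weight ratio
    target, acc, cut = total / PHI, 0.0, 1
    for i, v in enumerate(values, start=1):
        acc += v
        if acc >= target:
            cut = min(i, len(values) - 1)
            break
    left, right = values[:cut], values[cut:]
    frac = sum(left) / total
    if w >= h:   # split vertically when the region is wide
        return (layout(left, x, y, w * frac, h) +
                layout(right, x + w * frac, y, w * (1 - frac), h))
    else:        # split horizontally when the region is tall
        return (layout(left, x, y, w, h * frac) +
                layout(right, x, y + h * frac, w, h * (1 - frac)))

for rect in layout([6, 6, 4, 3, 2, 2, 1], 0, 0, 100, 62):
    print(rect)
```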

Maneze, D., Ramjan, L., DiGiacomo, M., Everett, B., Davidson, P.M. & Salamonson, Y. 2017, 'Negotiating health and chronic illness in Filipino-Australians: A qualitative study with implications for health promotion', Ethnicity and Health.

Meng, X., Cao, L., Zhang, X. & Shao, J. 2017, 'Top-k coupled keyword recommendation for relational keyword queries', Knowledge and Information Systems, vol. 50, no. 3, pp. 883-916.
View/Download from: Publisher's site
View description>>

Providing top-k typical relevant keyword queries would benefit users who cannot formulate appropriate queries to express their imprecise query intentions. By extracting the semantic relationships both between keywords and between keyword queries, this paper proposes a new keyword query suggestion approach which can provide typical and semantically related queries to the given query. Firstly, a keyword coupling relationship measure, which considers both intra- and inter-couplings between each pair of keywords, is proposed. Then, the semantic similarity of different keyword queries can be measured by using a semantic matrix, in which the coupling relationships between keywords in queries are preserved. Based on the query semantic similarities, we next propose an approximation algorithm to find the most typical queries from the query history by using the probability density estimation method. Lastly, a threshold-based top-k query selection method is proposed to expeditiously evaluate the top-k typical relevant queries. We demonstrate that our keyword coupling relationship and query semantic similarity measures can capture the coupling relationships between keywords and the semantic similarities between keyword queries accurately. The efficiency of the query typicality analysis and the top-k query selection algorithm is also demonstrated.
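The overall shape of such a pipeline, keyword couplings estimated from a query history, query-to-query similarity through those couplings, then top-k selection, can be sketched as below. The coupling measure, typicality analysis and thresholding in the paper are far more involved; the toy history and co-occurrence-based coupling here are purely illustrative.

```python
# Rough pipeline sketch: keyword couplings -> query similarity -> top-k.
import numpy as np

history = [["cheap", "hotel", "paris"], ["hotel", "paris", "eiffel"],
           ["cheap", "flight", "paris"], ["flight", "london"]]
vocab = sorted({w for q in history for w in q})
index = {w: i for i, w in enumerate(vocab)}

# keyword coupling: normalized co-occurrence counts across historical queries
C = np.zeros((len(vocab), len(vocab)))
for q in history:
    for a in q:
        for b in q:
            C[index[a], index[b]] += 1
C /= C.max()

def query_vec(q):
    v = np.zeros(len(vocab))
    for w in q:
        if w in index:
            v += C[index[w]]          # expand each keyword by its couplings
    return v / (np.linalg.norm(v) + 1e-12)

def top_k(query, k=2):
    qv = query_vec(query)
    scores = [(float(query_vec(h) @ qv), h) for h in history]
    return sorted(scores, reverse=True)[:k]

print(top_k(["cheap", "paris"]))
```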

Pan, S., Wu, J., Zhu, X., Long, G. & Zhang, C. 2017, 'Boosting for graph classification with universum', Knowledge and Information Systems, vol. 50, no. 1, pp. 53-77.
View/Download from: UTS OPUS or Publisher's site
View description>>

Recent years have witnessed extensive studies of graph classification due to the rapid increase in applications involving structural data and complex relationships. To support graph classification, all existing methods require that training graphs be relevant (or belong) to the target class, but they cannot integrate graphs irrelevant to the class of interest into the learning process. In this paper, we study a new universum graph classification framework which leverages additional “non-example” graphs to help improve graph classification accuracy. We argue that although universum graphs do not belong to the target class, they may contain meaningful structure patterns that help enrich the feature space for graph representation and classification. To support universum graph classification, we propose a mathematical programming algorithm, ugBoost, which integrates discriminative subgraph selection and margin maximization into a unified framework to fully exploit the universum. Because informative subgraph exploration in a universum setting requires searching a large space, we derive an upper-bound discriminative score for each subgraph and employ a branch-and-bound scheme to prune the search space. Using the explored subgraphs, our graph classification model maximizes the margin between positive and negative graphs and minimizes the loss on the universum graph examples simultaneously. The subgraph exploration and the learning are integrated and performed iteratively so that each can benefit the other. Experimental results and comparisons on real-world datasets demonstrate the performance of our algorithm.

Pan, S., Wu, J., Zhu, X., Long, G. & Zhang, C. 2017, 'Task Sensitive Feature Exploration and Learning for Multitask Graph Classification', IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 744-758.
View/Download from: Publisher's site
View description>>

Multitask learning (MTL) is commonly used for jointly optimizing multiple learning tasks. To date, all existing MTL methods have been designed for tasks with feature-vector represented instances, but cannot be applied to structured data, such as graphs. More importantly, when carrying out MTL, existing methods mainly focus on exploring overall commonality or disparity between tasks for learning, but cannot explicitly capture task relationships in the feature space, so they are unable to answer important questions such as: what exactly is shared between tasks, and what makes one task different from the others? In this paper, we formulate a new multitask graph learning problem and propose a task sensitive feature exploration and learning algorithm for multitask graph classification. Because graphs do not have features readily available, we advocate a task sensitive feature exploration and learning paradigm to jointly discover discriminative subgraph features across different tasks. In addition, a feature learning process is carried out to categorize each subgraph feature into one of three categories: 1) common feature; 2) task auxiliary feature; and 3) task specific feature, indicating whether the feature is shared by all tasks, by a subset of tasks, or by only one specific task, respectively. The feature learning and the multiple task learning are iteratively optimized to form a multitask graph classification model with a global optimization goal. Experiments on real-world functional brain analysis and chemical compound categorization demonstrate the algorithm's performance. The results confirm that our method can explicitly capture task correlations and uniqueness in the feature space, and explicitly answer what is shared between tasks and what is unique to a specific task.

Qiao, M., Liu, L., Yu, J., Xu, C. & Tao, D. 2017, 'Diversified dictionaries for multi-instance learning', Pattern Recognition, vol. 64, pp. 407-416.
View/Download from: Publisher's site
View description>>

Multiple-instance learning (MIL) has been a popular topic in the study of pattern recognition for years due to its usefulness for tasks such as drug activity prediction and image/text classification. In a typical MIL setting, a bag contains a bag-level label and more than one instance/pattern. How to bridge instance-level representations to bag-level labels is a key step in achieving satisfactory classification accuracy. In this paper, we present a supervised learning method, diversified dictionaries MIL, to address this problem. Our approach, on the one hand, exploits bag-level label information for training class-specific dictionaries. On the other hand, it introduces a diversity regularizer into the class-specific dictionaries to avoid ambiguity between them. To the best of our knowledge, this is the first time that the diversity prior has been introduced to solve MIL problems. Experiments conducted on several benchmark (drug activity and image/text annotation) datasets show that the proposed method compares favorably to state-of-the-art methods.

Rao, A., Newton, P., DiGiacomo, M., Hickman, L., Hwang, C. & Davidson, P. 2017, 'Optimal gender specific strategies for the secondary prevention of cardiovascular disease in women: a systematic review', Journal of Cardiopulmonary Rehabilitation and Prevention.

Ren, J., Song, J., Ellis, J. & Li, J. 2017, 'Staged heterogeneity learning to identify conformational B-cell epitopes from antigen sequences.', BMC Genomics, vol. 18, no. Suppl 2, p. 113.
View/Download from: Publisher's site
View description>>

BACKGROUND: The broad heterogeneity of antigen-antibody interactions brings tremendous challenges to the design of a widely applicable learning algorithm to identify conformational B-cell epitopes. Besides the intrinsic heterogeneity introduced by diverse species, extra heterogeneity can also be introduced by various data sources, adding another layer of complexity and further confounding the research. RESULTS: This work proposed a staged heterogeneity learning method, which learns both characteristics and heterogeneity of data in a phased manner. The method was applied to identify antigenic residues of heterogenous conformational B-cell epitopes based on antigen sequences. In the first stage, the model learns the general epitope patterns of each kind of propensity from a large data set containing computationally defined epitopes. In the second stage, the model learns the heterogenous complementarity of these propensities from a relatively small guided data set containing experimentally determined epitopes. Moreover, we designed an algorithm to cluster the predicted individual antigenic residues into conformational B-cell epitopes so as to provide strong potential for real-world applications, such as vaccine development. With heterogeneity well learnt, the transferability of the prediction model was remarkably improved to handle new data with a high level of heterogeneity. The model has been tested on two data sets with experimentally determined epitopes, and on a data set with computationally defined epitopes. This proposed sequence-based method achieved outstanding performance - about twice that of existing methods, including the sequence-based predictor CBTOPE and three other structure-based predictors. CONCLUSIONS: The proposed method uses only antigen sequence information, and thus has much broader applications.
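The staged idea, learning general patterns from a large, weakly (computationally) labelled set and then refining on a small, experimentally curated set, can be mimicked with any incrementally trainable classifier. The sketch below uses scikit-learn's SGDClassifier with partial_fit on synthetic data; the propensity features and the epitope clustering step described in the paper are not reproduced.

```python
# Two-stage ("staged") training sketch on synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X_big, y_big = make_classification(n_samples=5000, n_features=30, random_state=0)
X_small, y_small = make_classification(n_samples=200, n_features=30, random_state=1)

clf = SGDClassifier(random_state=0)

# Stage 1: general patterns from the large, computationally defined set.
clf.partial_fit(X_big, y_big, classes=np.array([0, 1]))

# Stage 2: several passes over the small guided set to adapt to its
# heterogeneity without retraining from scratch.
for _ in range(20):
    clf.partial_fit(X_small, y_small)

print(clf.score(X_small, y_small))
```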

Rihari-Thomas, J., DiGiacomo, M., Phillips, J., Newton, P. & Davidson, P.M. 2017, 'Clinician Perspectives of Barriers to Effective Implementation of a Rapid Response System in an Academic Health Centre: A Focus Group Study', Int J Health Policy Manag, vol. 6, no. x, pp. 1-10.
View/Download from: UTS OPUS or Publisher's site
View description>>

Background: Systemic and structural issues of rapid response system (RRS) models can hinder implementation. This study sought to understand the ways in which acute care clinicians (physicians and nurses) experience and negotiate care for deteriorating patients within the RRS. Methods: Physicians and nurses working within an Australian academic health centre within a jurisdictional-based model of clinical governance participated in focus group interviews. Verbatim transcripts were analysed using thematic content analysis. Results: Thirty-four participants (21 physicians and 13 registered nurses [RNs]) participated in six focus groups over five weeks in 2014. Implementing the RRS in daily practice was a process of informal communication and negotiation in spite of standardised protocols. Themes highlighted several systems or organisational-level barriers to an effective RRS, including (1) responsibility is inversely proportional to clinical experience; (2) actions around system flexibility contribute to deviation from protocol; (3) misdistribution of resources leads to perceptions of inadequate staffing levels inhibiting full optimisation of the RRS; and (4) poor communication and documentation of RRS increases clinician workloads. Conclusion: Implementing a RRS is complex and multifactorial, influenced by various inter- and intra-professional factors, staffing models and organisational culture. The RRS is not a static model; it is both reflexive and iterative, perpetually transforming to meet healthcare consumer and provider demands and local unit contexts and needs. Requiring more than just a strong initial implementation phase, new models of care such as a RRS demand good governance processes, ongoing support and regular evaluation and refinement. Cultural, organizational and professional factors, as well as systems-based processes, require consideration if RRSs are to achieve their intended outcomes in dynamic healthcare settings.

Smith, T.A., Agar, M., Jenkins, C.R., Ingham, J.M. & Davidson, P.M. 2017, 'Experience of acute noninvasive ventilation-insights from 'Behind the Mask': a qualitative study.', BMJ supportive & palliative care.
View/Download from: UTS OPUS or Publisher's site
View description>>

Non-invasive ventilation (NIV) is widely used in the management of acute and acute-on-chronic respiratory failure. Understanding the experiences of patients treated with NIV is critical to person-centred care. We describe the subjective experiences of individuals treated with NIV for acute hypercapnic respiratory failure. Qualitative face-to-face interviews were analysed using thematic analysis. The setting was an Australian tertiary teaching hospital. Participants were individuals with acute hypercapnic respiratory failure treated with NIV outside the intensive care unit; individuals who did not speak English or were unable or unwilling to consent were excluded. 13 participants were interviewed and thematic saturation was achieved. Participants described NIV providing substantial relief from symptoms and causing discomfort. They described enduring NIV to facilitate another chance at life. Although participants sometimes appeared passive, others expressed a strong conviction that they knew which behaviours and treatments relieved their distress. Most participants described gaps in their recollection of acute hospitalisation and placed a great amount of trust in healthcare providers. All participants indicated that they would accept NIV in the future, if clinically indicated, and often expressed a sense of compulsion to accept NIV. Participants' description of their experience of NIV was intertwined with their experience of chronic disease. Participants described balancing the benefits and burdens of NIV, with the goal of achieving another chance at life. Gaps in recall of their treatment with NIV were frequent, potentially suggesting underlying delirium. The findings of this study inform patient-centred care and have implications for the care of patients requiring NIV and for advance care planning discussions.

Stoianoff, N.P. & Walpole, M. 2017, 'Tax and the environment: an evaluation framework for tax policy reform - group Delphi study', Australian Tax Forum: a journal of taxation policy, law and reform, vol. 31, pp. 693-716.

Tian, D. & Tao, D. 2017, 'Global Hashing System for Fast Image Search', IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 49-89.
View/Download from: Publisher's site
View description>>

Hashing methods have been widely investigated for fast approximate nearest neighbor search in large data sets. Most existing methods use binary vectors in lower-dimensional spaces to represent data points that are usually real vectors of higher dimensionality. We divide the hashing process into two steps. Data points are first embedded in a low-dimensional space, and the global positioning system method is subsequently introduced, but modified for binary embedding. We devise data-independent and data-dependent methods to distribute the satellites at appropriate locations. Our methods are based on finding the tradeoff between the information losses in these two steps. Experiments show that our data-dependent method outperforms other methods on data sets of different sizes, from 100k to 10M. By incorporating the orthogonality of the code matrix, both our data-independent and data-dependent methods are particularly impressive in experiments on longer bits.
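The two-step structure, low-dimensional embedding followed by binarized distances to a set of anchor points ("satellites"), can be sketched as below. Here the satellites are placed randomly and bits are balanced at the median, whereas optimizing the satellite locations is exactly the part the paper addresses; this is an illustration of the structure only.

```python
# Sketch of the two-step hashing idea: embed, then binarize satellite distances.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128))

# Step 1: low-dimensional embedding.
Z = PCA(n_components=16, random_state=0).fit_transform(X)

# Step 2: distances to randomly placed satellites, thresholded at the median
# so each bit is roughly balanced.
n_bits = 32
satellites = rng.standard_normal((n_bits, Z.shape[1])) * Z.std()
D = np.linalg.norm(Z[:, None, :] - satellites[None, :, :], axis=2)  # (n, n_bits)
codes = (D > np.median(D, axis=0)).astype(np.uint8)

print(codes.shape, codes[:2])
```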

Usman, M., Jan, M.A. & He, X.S. 2017, 'Cryptography-Based Secure Data Storage and Sharing Using HEVC and Public Clouds', Information Sciences, vol. 387, pp. 90-102.
View/Download from: UTS OPUS or Publisher's site
View description>>

Mobile devices are widely used for uploading/downloading media files such as audio, video and images to/from remote servers. These devices have limited resources and are required to offload resource-consuming media processing tasks to the clouds for further processing. Migration of these tasks means that the media services provided by the clouds need to be authentic and trusted by the mobile users. The existing schemes for secure exchange of media files between mobile devices and the clouds have limitations in terms of memory support, processing load, battery power, and data size. These schemes lack support for large-sized video files and are not suitable for resource-constrained mobile devices. This paper proposes a secure, lightweight, robust and efficient scheme for data exchange between mobile users and the media clouds. The proposed scheme considers High Efficiency Video Coding (HEVC) Intra-encoded video streams in unsliced mode as a source for data hiding. Our proposed scheme aims to support real-time processing with power-saving constraints in mind. The Advanced Encryption Standard (AES) is used as the base encryption technique in our proposed scheme. The simulation results clearly show that the proposed scheme outperforms AES-256 by decreasing the processing time by up to approximately 4.76% while increasing the data size by only about 0.72%. The proposed scheme can readily be applied to real-time cloud media streaming.

Walczak, A., Butow, P.N., Tattersall, M.H.N., Davidson, P.M., Young, J., Epstein, R.M., Costa, D.S.J. & Clayton, J.M. 2017, 'Encouraging early discussion of life expectancy and end-of-life care: A randomised controlled trial of a nurse-led communication support program for patients and caregivers', International Journal of Nursing Studies, vol. 67, pp. 31-40.
View/Download from: Publisher's site
View description>>

Background Patients are often not given the information needed to understand their prognosis and make informed treatment choices, with many consequently experiencing less than optimal care and quality-of-life at end-of-life. Objectives To evaluate the efficacy of a nurse-facilitated communication support program for patients with advanced, incurable cancer to assist them in discussing prognosis and end-of-life care. Design A parallel-group randomised controlled trial design was used. Settings This trial was conducted at six cancer treatment centres affiliated with major hospitals in Sydney, Australia. Participants 110 patients with advanced, incurable cancer participated. Methods The communication support program included guided exploration of a question prompt list, communication challenges, patient values and concerns and the value of discussing end-of-life care early, with oncologists cued to endorse question-asking and question prompt list use. Patients were randomised after baseline measure completion, a regular oncology consultation was audio-recorded and a follow-up questionnaire was completed one month later. Communication, health-related quality-of-life and satisfaction measures and a manualised consultation-coding scheme were used. Descriptive, Mixed Modelling and Generalised Linear Mixed Modelling analyses were conducted using SPSS version 22. Results Communication support program recipients gave significantly more cues for discussion of prognosis, end-of-life care, future care options and general issues not targeted by the intervention during recorded consultations, but did not ask more questions about these issues or overall. Oncologists’ question prompt list and question asking endorsement was inconsistent. Communication support program recipients’ self-efficacy in knowing what questions to ask their doctor significantly improved at follow-up while control arm patients’ self-efficacy declined. The communication support program did...

Wang, H., Wu, J., Pan, S., Zhang, P. & Chen, L. 2017, 'Towards large-scale social networks with online diffusion provenance detection', Computer Networks, vol. 114, pp. 154-166.
View/Download from: Publisher's site
View description>>

In this paper we study a new problem of online discovery of diffusion provenances in large networks. Existing work on network diffusion provenance identification focuses on offline learning, where data collected from network detectors are static and a snapshot of the network is available before learning. However, an offline learning model does not meet the need for early warning, real-time awareness, or a real-time response to malicious information spreading in networks. To this end, we propose an online regression model for real-time diffusion provenance identification. Specifically, we first use offline collected network cascades to infer the edge transmission weights, and then use an online l1 non-convex regression model as the identification model. The proposed methods are empirically evaluated on both synthetic and real-world networks. Experimental results demonstrate the effectiveness of the proposed model.
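The flavour of an online l1-regularized regression can be illustrated with stochastic gradient steps followed by soft-thresholding, which keeps the estimate sparse as observations stream in. The paper uses a non-convex l1-style penalty and infers edge weights from cascades first; the sketch below shows only a generic online update on synthetic data.

```python
# Online l1-regularized regression: SGD step + soft-thresholding per observation.
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(0)
d = 50
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.5, 1.0]             # sparse ground truth (e.g., few sources)

w = np.zeros(d)
lr, lam = 0.05, 0.01
for step in range(2000):                  # stream of (x, y) observations
    x = rng.standard_normal(d)
    y = x @ true_w + 0.1 * rng.standard_normal()
    grad = (w @ x - y) * x                # squared-loss gradient for one sample
    w = soft_threshold(w - lr * grad, lr * lam)

print(np.flatnonzero(np.abs(w) > 0.1))    # indices recovered as "provenance"
```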

Wang, H., Zhang, P., Zhu, X., Tsang, I.W.H., Chen, L., Zhang, C. & Wu, X. 2017, 'Incremental Subgraph Feature Selection for Graph Classification', IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 1, pp. 128-142.
View/Download from: Publisher's site
View description>>

Graph classification is an important tool for analyzing data with structure dependency, where subgraphs are often used as features for learning. In reality, the dimension of the subgraphs crucially depends on the threshold setting of the frequency support parameter, and the number may become extremely large. As a result, subgraphs may be incrementally discovered to form a feature stream and require the underlying graph classifier to effectively discover representative subgraph features from the subgraph feature stream. In this paper, we propose a primal-dual incremental subgraph feature selection algorithm (ISF) based on a max-margin graph classifier. The ISF algorithm constructs a sequence of solutions that are both primal and dual feasible. Each primal-dual pair shrinks the dual gap and renders a better solution for the optimal subgraph feature set. To avoid the bias of the ISF algorithm toward short-pattern subgraph features, we present a new incremental subgraph join feature selection algorithm (ISJF) that forces graph classifiers to join short-pattern subgraphs and generate long-pattern subgraph features. We evaluate the performance of the proposed models on both synthetic networks and real-world social network data sets. Experimental results demonstrate the effectiveness of the proposed methods.

Wang, J.J.J., Bartlett, M. & Ryan, L. 2017, 'On the impact of nonresponse in logistic regression: application to the 45 and Up study.', BMC Med Res Methodol, vol. 17, no. 1, p. 80.
View/Download from: Publisher's site
View description>>

BACKGROUND: In longitudinal studies, nonresponse to follow-up surveys poses a major threat to the validity, interpretability and generalisation of results. The problem of nonresponse is further complicated by the possibility that nonresponse may depend on the outcome of interest. We identified sociodemographic, general health and wellbeing characteristics associated with nonresponse to the follow-up questionnaire and assessed the extent and effect of nonresponse on statistical inference in a large-scale population cohort study. METHODS: We obtained the data from the baseline and first wave of the follow-up survey of the 45 and Up Study. Of those who were invited to participate in the follow-up survey, 65.2% responded. A logistic regression model was used to identify baseline characteristics associated with follow-up response. A Bayesian selection model approach with sensitivity analysis was implemented to model nonignorable nonresponse. RESULTS: Characteristics associated with a higher likelihood of responding to the follow-up survey include female gender, age categories 55-74, high educational qualification, being married/de facto, working part-time or being partially or fully retired, and higher household income. Parameter estimates and conclusions are generally consistent across different assumptions on the missing data mechanism. However, we observed some sensitivity for variables that are strong predictors of both the outcome and nonresponse. CONCLUSIONS: Results indicated that, in the context of the binary outcome under study, nonresponse did not result in substantial bias and did not alter the interpretation of results in general. Conclusions were still largely robust under a nonignorable missing data mechanism. Use of a Bayesian selection model is recommended as a useful strategy for assessing the potential sensitivity of results to missing data.
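The first analysis step, a logistic regression of the follow-up response indicator on baseline characteristics, has the following general shape. The data and covariate names below are synthetic and illustrative only, and the Bayesian selection model for nonignorable nonresponse is not reproduced.

```python
# Sketch of a response-propensity logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age_55_74": rng.integers(0, 2, n),
    "degree": rng.integers(0, 2, n),
})
# simulate a response indicator that depends on the baseline covariates
logit_p = -0.3 + 0.4 * df.female + 0.5 * df.age_55_74 + 0.6 * df.degree
df["responded"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("responded ~ female + age_55_74 + degree", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios for responding to follow-up
```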

Wu, J., Pan, S., Zhu, X., Zhang, C. & Wu, X. 2017, 'Positive and Unlabeled Multi-Graph Learning', IEEE Transactions on Cybernetics, vol. 47, no. 4, pp. 818-829.
View/Download from: Publisher's site
View description>>

In this paper, we advance graph classification to handle multi-graph learning for complicated objects, where each object is represented as a bag of graphs and the label is only available to each bag but not individual graphs. In addition, when training classifiers, users are only given a handful of positive bags and many unlabeled bags, and the learning objective is to train models to classify previously unseen graph bags with maximum accuracy. To achieve the goal, we propose a positive and unlabeled multi-graph learning (puMGL) framework to first select informative subgraphs to convert graphs into a feature space. To utilize unlabeled bags for learning, puMGL assigns a confidence weight to each bag and dynamically adjusts its weight value to select “reliable negative bags.” A number of representative graphs, selected from positive bags and identified reliable negative graph bags, form a “margin graph pool” which serves as the base for deriving subgraph patterns, training graph classifiers, and further updating the bag weight values. A closed-loop iterative process helps discover optimal subgraphs from positive and unlabeled graph bags for learning. Experimental comparisons demonstrate the performance of puMGL for classifying real-world complicated objects.

Xiong, W., Zhang, L., Du, B. & Tao, D. 2017, 'Combining local and global: Rich and robust feature pooling for visual recognition', Pattern Recognition, vol. 62, pp. 225-235.
View/Download from: Publisher's site
View description>>

The human visual system proves expert in discovering patterns in both global and local feature space. Can we design a similar way for unsupervised feature learning? In this paper, we propose a novel spatial pooling method within an unsupervised feature learning framework, named Rich and Robust Feature Pooling (R2FP), to better extract rich and robust representation from sparse feature maps learned from the raw data. Both local and global pooling strategies are further considered to instantiate such a method. The former selects the most representative features in the sub-region and summarizes the joint distribution of the selected features, while the latter is utilized to extract multiple resolutions of features and fuse the features with a feature balance kernel for rich representation. Extensive experiments on several image recognition tasks demonstrate the superiority of the proposed method.

Xu, Z., Tao, D., Huang, S. & Zhang, Y. 2017, 'Friend or Foe: Fine-Grained Categorization with Weak Supervision', IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 135-146.
View/Download from: Publisher's site
View description>>

Multi-instance learning (MIL) is widely acknowledged as a fundamental method to solve weakly supervised problems. While MIL is usually effective in standard weakly supervised object recognition tasks, in this paper, we investigate the applicability of MIL on an extreme case of weakly supervised learning on the task of fine-grained visual categorization, in which intra-class variance could be larger than inter-class due to the subtle differences between subordinate categories. For this challenging task, we propose a new method that generalizes the standard multi-instance learning framework, for which a novel multi-task co-localization algorithm is proposed to take advantage of the relationship among fine-grained categories and meanwhile performs as an effective initialization strategy for the non-convex multi-instance objective. The localization results also enable object-level domain-specific fine-tuning of deep neural networks, which significantly boosts the performance. Experimental results on three fine-grained datasets reveal the effectiveness of the proposed method, especially the importance of exploiting inter-class relationships between object categories in weakly supervised fine-grained recognition.

Yao, Y., Zhang, J., Shen, F., Hua, X., Xu, J. & Tang, Z. 2017, 'A new web-supervised method for image dataset constructions', Neurocomputing, vol. 236, pp. 23-31.
View/Download from: UTS OPUS or Publisher's site
View description>>

The goal of this work is to automatically collect a large number of highly relevant natural images from the Internet for given queries. A novel automatic image dataset construction framework is proposed by employing multiple query expansions. Specifically, the given queries are first expanded by searching in the Google Books Ngrams Corpora to obtain richer semantic descriptions, from which the visually non-salient and less relevant expansions are then filtered. After retrieving images from the Internet with the filtered expansions, we further filter noisy images by clustering and progressive Convolutional Neural Network (CNN)-based methods. To evaluate the performance of our proposed method for image dataset construction, we build an image dataset with 10 categories. We then run object detection on our image dataset and on three other image datasets constructed by weakly supervised, web-supervised and fully supervised learning; the experimental results indicate that our method is superior to state-of-the-art weakly supervised and web-supervised methods. In addition, we perform cross-dataset classification to evaluate the performance of our dataset against two publicly available manually labelled datasets, STL-10 and CIFAR-10.

Yu, J., Yang, X., Gao, F. & Tao, D. 2017, 'Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking', IEEE Transactions on Cybernetics.
View/Download from: Publisher's site
View description>>

How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to initially obtain a distance metric in different visual spaces, and an MDML method is used to assign optimal weights for different modalities. We then conduct alternating optimization to train the ranking model, which is used for ranking new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.

Zeng, K., Yu, J., Wang, R., Li, C. & Tao, D. 2017, 'Coupled Deep Autoencoder for Single Image Super-Resolution', IEEE Transactions on Cybernetics, vol. 47, no. 1, pp. 27-37.
View/Download from: Publisher's site
View description>>

Sparse coding has been widely applied to learning-based single image super-resolution (SR) and has obtained promising performance by jointly learning effective representations for low-resolution (LR) and high-resolution (HR) image patch pairs. However, the resulting HR images often suffer from ringing, jaggy, and blurring artifacts due to the strong yet ad hoc assumptions that the LR image patch representation is equal to, is linear with, lies on a manifold similar to, or has the same support set as the corresponding HR image patch representation. Motivated by the success of deep learning, we develop a data-driven model coupled deep autoencoder (CDA) for single image SR. CDA is based on a new deep architecture and has high representational capability. CDA simultaneously learns the intrinsic representations of LR and HR image patches and a big-data-driven function that precisely maps these LR representations to their corresponding HR representations. Extensive experimentation demonstrates the superior effectiveness and efficiency of CDA for single image SR compared to other state-of-the-art methods on Set5 and Set14 datasets.
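The coupled structure, compact codes for LR and HR patches plus a learned mapping between the two code spaces, can be mimicked very roughly with off-the-shelf components. In the sketch below, PCA stands in for the deep autoencoders and an MLP regressor stands in for the learned mapping; the patch data are synthetic and this is not the CDA model itself.

```python
# Rough stand-in for the coupled idea: LR codes -> HR codes -> HR patches.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hr = rng.standard_normal((2000, 64))                       # flattened "HR" patches
lr = hr[:, ::2] + 0.05 * rng.standard_normal((2000, 32))   # crude "LR" patches

pca_lr = PCA(n_components=16).fit(lr)
pca_hr = PCA(n_components=16).fit(hr)
z_lr, z_hr = pca_lr.transform(lr), pca_hr.transform(hr)

mapper = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
mapper.fit(z_lr, z_hr)                                     # LR code -> HR code

hr_hat = pca_hr.inverse_transform(mapper.predict(pca_lr.transform(lr[:5])))
print(np.mean((hr_hat - hr[:5]) ** 2))                     # reconstruction error
```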

Zhang, K., Tao, D., Gao, X., Li, X. & Li, J. 2017, 'Coarse-to-Fine Learning for Single-Image Super-Resolution', IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1109-1122.
View/Download from: Publisher's site
View description>>

This paper develops a coarse-to-fine framework for single-image super-resolution (SR) reconstruction. The coarse-to-fine approach achieves high-quality SR recovery based on the complementary properties of both example learning- and reconstruction-based algorithms: example learning-based SR approaches are useful for generating plausible details from external exemplars but poor at suppressing aliasing artifacts, while reconstruction-based SR methods are propitious for preserving sharp edges yet fail to generate fine details. In the coarse stage of the method, we use a set of simple yet effective mapping functions, learned via correlative neighbor regression of grouped low-resolution (LR) to high-resolution (HR) dictionary atoms, to synthesize an initial SR estimate with particularly low computational cost. In the fine stage, we devise an effective regularization term that seamlessly integrates the properties of local structural regularity, nonlocal self-similarity, and collaborative representation over relevant atoms in a learned HR dictionary, to further improve the visual quality of the initial SR estimation obtained in the coarse stage. The experimental results indicate that our method outperforms other state-of-the-art methods for producing high-quality images, even though both the initial SR estimation and the subsequent enhancement are cheap to implement.

Zhang, S., Lan, X., Yao, H., Zhou, H., Tao, D. & Li, X. 2017, 'A Biologically Inspired Appearance Model for Robust Visual Tracking', IEEE Transactions on Neural Networks and Learning Systems.
View/Download from: Publisher's site
View description>>

In this paper, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding, and pooling. The first three layers stem from the models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
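The five-layer structure named in the abstract (whitening, rectification, normalization, coding, pooling) can be walked through on toy patch data as follows. PCA whitening and projection onto a random dictionary with crude thresholding stand in for the discriminative sparse coding and spatial pyramid pooling actually used in the paper; every parameter here is illustrative.

```python
# Toy walk-through of the five-layer pipeline:
# whitening -> rectification -> normalization -> coding -> pooling.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 36))        # 6x6 grayscale patches

# 1) whitening
white = PCA(whiten=True).fit_transform(patches)

# 2) rectification (split positive / negative parts)
rect = np.concatenate([np.maximum(white, 0), np.maximum(-white, 0)], axis=1)

# 3) normalization (unit l2 norm per patch)
norm = rect / (np.linalg.norm(rect, axis=1, keepdims=True) + 1e-8)

# 4) coding against a dictionary (random here; learned and discriminative
#    in the paper), keeping only the strongest responses
D = rng.standard_normal((norm.shape[1], 128))
codes = norm @ D
codes[np.abs(codes) < np.percentile(np.abs(codes), 90)] = 0.0   # crude sparsity

# 5) pooling: max over groups of patches (stand-in for spatial pyramid pooling)
pooled = codes.reshape(50, 10, -1).max(axis=1)
print(pooled.shape)
```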

Zhang, T., Jia, W., He, X.S. & Yang, J. 2017, 'Discriminative Dictionary Learning with Motion Weber Local Descriptor for Violence Detection', IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 3, pp. 696-709.
View/Download from: UTS OPUS or Publisher's site
View description>>

Automatic violence detection from video is a hot topic for many video surveillance applications. However, there has been little success in developing an algorithm that can detect violence in surveillance videos with high performance. In this paper, following our recently proposed idea of the motion Weber local descriptor (WLD), we make two major improvements and propose a more effective and efficient algorithm for detecting violence from motion images. First, we propose an improved WLD (IWLD) to better depict low-level image appearance information, and then extend the spatial descriptor IWLD by adding a temporal component to capture local motion information and hence form the motion IWLD. Second, we propose a modified sparse-representation-based classification model to both control the reconstruction error of the coding coefficients and minimize the classification error. Based on the proposed sparse model, a class-specific dictionary containing dictionary atoms corresponding to the class labels is learned using the class labels of training samples. With this learned dictionary, not only the representation residual but also the representation coefficients become discriminative. A classification scheme integrating the modified sparse model is developed to exploit such discriminative information. The experimental results on three benchmark data sets have demonstrated the superior performance of the proposed approach over the state of the art.

Zhang, T., Jia, W., Yang, B., Yang, J., He, X. & Zheng, Z. 2017, 'MoWLD: a robust motion image descriptor for violence detection', Multimedia Tools and Applications, vol. 76, no. 1, pp. 1419-1438.
View/Download from: UTS OPUS or Publisher's site
View description>>

Automatic violence detection from video is a hot topic for many video surveillance applications. However, there has been little success in designing an algorithm that can detect violence in surveillance videos with high performance. Existing methods typically apply the Bag-of-Words (BoW) model on local spatiotemporal descriptors. However, traditional spatiotemporal features are not discriminative enough, and the BoW model roughly assigns each feature vector to only one visual word and therefore ignores the spatial relationships among the features. To tackle these problems, in this paper we propose a novel Motion Weber Local Descriptor (MoWLD) in the spirit of the well-known WLD and make it a powerful and robust descriptor for motion images. We extend the WLD spatial descriptions by adding a temporal component to the appearance descriptor, which implicitly captures local motion information as well as low-level image appearance information. To eliminate redundant and irrelevant features, non-parametric Kernel Density Estimation (KDE) is employed on the MoWLD descriptor. In order to obtain more discriminative features, we adopt the sparse coding and max pooling scheme to further process the selected MoWLDs. Experimental results on three benchmark datasets have demonstrated the superiority of the proposed approach over the state of the art.

Zhao, Y., Di, H., Zhang, J., Lu, Y., Lv, F. & Li, Y. 2017, 'Region-based Mixture Models for human action recognition in low-resolution videos', Neurocomputing.
View/Download from: UTS OPUS or Publisher's site
View description>>

State-of-the-art performance in human action recognition is achieved by the use of dense trajectories which are extracted by optical flow algorithms. However, optical flow algorithms are far from perfect in low-resolution (LR) videos. In addition, the spatial and temporal layout of features is a powerful cue for action discrimination. However, most existing methods encode the layout by first segmenting body parts, which is not feasible in LR videos. Addressing these problems, we adopt the Layered Elastic Motion Tracking (LEMT) method to extract a set of long-term motion trajectories and a long-term common shape from each video sequence, where the extracted trajectories are much denser than those of sparse interest points (SIPs); we then present a hybrid feature representation to integrate both the shape and the motion features; and finally we propose a Region-based Mixture Model (RMM) to be utilized for action classification. The RMM encodes the spatial layout of features without any need for body-part segmentation. Experimental results show that the approach is effective and, more importantly, that it is more general for LR recognition tasks.

Zuo, Y., Wu, Q., Zhang, J. & An, P. 2017, 'Explicit Edge Inconsistency Evaluation Model for Color-guided Depth Map Enhancement', IEEE Transactions on Circuits and Systems for Video Technology.
View/Download from: UTS OPUS or Publisher's site
View description>>

Color-guided depth enhancement refines depth maps under the assumption that depth edges and color edges at corresponding locations are consistent. Among the methods for this low-level vision task, Markov Random Fields (MRF), including its variants, is one of the major approaches and has dominated this area for several years. However, the assumption above is not always true. To tackle the problem, state-of-the-art solutions adjust the weighting coefficient inside the smoothness term of the MRF model. These methods lack an explicit evaluation model to quantitatively measure the inconsistency between the depth edge map and the color edge map, so they cannot adaptively control the strength of the guidance from the color image for depth enhancement, leading to various defects such as texture-copy artifacts and blurred depth edges. In this paper, we propose a quantitative measurement of such inconsistency and explicitly embed it into the smoothness term. The proposed method demonstrates promising experimental results when compared with benchmark and state-of-the-art methods on the Middlebury, ToF-Mark and NYU datasets.

Conferences

Agrawal, S. & Williams, M.A. 2017, 'Robot authority and human obedience: A study of human behaviour using a robot security guard', ACM/IEEE International Conference on Human-Robot Interaction, pp. 57-58.
View/Download from: Publisher's site
View description>>

There has been much debate, many sci-fi movie scenes, and several scientific studies exploring the concept of robot authority. Some of the key research questions include: when should humans follow or question robot instructions, and how can a robot increase its ability to convince humans to follow its instructions or to change their behaviour? In this paper, we describe a recent experiment designed to explore the notions of robot authority and human obedience. We set up a robot in a publicly accessible building to act as a security guard that issued instructions to specific humans. We identified and analysed the factors that affected a human's decision to follow the robot's instruction. The four key factors were: perceived aggression, responsiveness, anthropomorphism, and the perceived level of safety and intelligence in the robot's behaviour. We implemented various social cues to exhibit and convey authority and aggressiveness in the robot's behaviour. The results suggest that the degree of aggression that different people perceived in the robot's behaviour did not have a significant impact on their decision to follow the robot's instruction. However, the people who disobeyed the robot perceived the robot's behaviour to be more unsafe and less human-like than the people who followed the robot's instructions, and also found the robot to be more responsive.

Chinchore, A., Xu, G. & Jiang, F. 2017, 'Classifying sybil in MSNs using C4.5', IEEE/ACM BESC 2016 - Proceedings of 2016 International Conference on Behavioral, Economic, Socio - Cultural Computing.
View/Download from: Publisher's site
View description>>

Sybil detection is an important task in cyber security research. Over the past years, many data mining algorithms have been adopted to fulfil this task, and using classification and regression for Sybil detection remains very challenging. Despite existing research on classification models for Sybil detection and prediction, this work proposes a new solution for how Sybil activity can be tracked. Prediction of Sybil behaviour is demonstrated by analysing graph-based classification and regression techniques using decision trees, and by describing dependencies across the different methods. The calculated gain and maxGain helped to trace some Sybil users in the datasets.
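C4.5 itself is not available in scikit-learn; an entropy-criterion decision tree is the closest readily available stand-in for a classifier of this kind. The features and labels below are synthetic and purely illustrative of a Sybil-versus-genuine account classifier, not the data or attributes used in the paper.

```python
# Entropy-based decision tree as a C4.5-style stand-in on synthetic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# pretend features: e.g. degree, clustering coefficient, message rate, account age
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
tree.fit(X_tr, y_tr)

print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["degree", "clustering", "msg_rate", "age"]))
```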

Jiang, F., Gan, J., Xu, Y. & Xu, G. 2017, 'Coupled behavioral analysis for user preference-based email spamming', IEEE/ACM BESC 2016 - Proceedings of 2016 International Conference on Behavioral, Economic, Socio - Cultural Computing.
View/Download from: Publisher's site
View description>>

In this paper, we develop and implement a new email spamming system leveraging coupled text-similarity analysis of user preference and a virtual meta-layer user-based email network; we take social networks or campus LAN networks as the spam social-network scenario. Few current practices exploit social networking initiatives to assist in spam filtering. A social network essentially has a large number of account features and attributes to be considered. Instead of considering a large number of user account features, we construct a new model, called the meta-layer email network, which reduces these features by considering only individual users' actions as indicators of user preference; these common user actions are used to construct a social behavior-based email network. With further analytic results from text-similarity measurements on the contents of each individual email, the behavior-based virtual email network can be improved to capture user preferences with much higher accuracy. Furthermore, a coupled selection model is developed for this email network, so that all relevant factors/features can be considered as a whole and emails can be recommended to each user individually. The experimental results show that the new approach achieves higher precision and accuracy, with better email ranking in favor of personalised preferences.
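The text-similarity component alone can be sketched with TF-IDF and cosine similarity: a new email is scored against messages the user previously kept or deleted. The coupled behaviour model and the meta-layer email network from the paper are not reproduced, and the example messages are invented.

```python
# Sketch of the content-similarity signal only (TF-IDF + cosine similarity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kept = ["project meeting moved to friday", "lecture notes for data mining"]
deleted = ["win a free cruise now", "cheap pills limited offer"]
new_email = ["free offer on cheap cruise tickets"]

vec = TfidfVectorizer()
M = vec.fit_transform(kept + deleted + new_email)

sims = cosine_similarity(M[-1], M[:-1]).ravel()
kept_score = sims[:len(kept)].max()        # similarity to messages the user kept
spam_score = sims[len(kept):].max()        # similarity to messages the user deleted
print("kept:", round(kept_score, 3), "spam:", round(spam_score, 3))
print("flag as spam" if spam_score > kept_score else "deliver to inbox")
```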

Li, Y. & Tao, D. 2017, 'Online Semi-Supervised Multi-Task Distance Metric Learning', IEEE International Conference on Data Mining Workshops, ICDMW, pp. 474-479.
View/Download from: Publisher's site
View description>>

Given several related tasks, multi-task learning can improve the performance of each task by sharing parameters or feature representations. In this paper, we apply multi-task learning to a particular case of distance metric learning in which we have a small amount of labeled data. Considering the effectiveness of semi-supervised learning in handling machine learning problems with few labeled examples, we integrate semi-supervised learning with multi-task learning and distance metric learning. One drawback of multi-task learning is its low training efficiency, as we need all the training examples from all tasks to train a model. We propose an online learning algorithm to overcome this drawback of multi-task learning. Experiments are conducted on a landmark multi-task learning dataset to demonstrate the efficiency and effectiveness of our online semi-supervised multi-task learning algorithm.

Li, Y., Tian, X. & Tao, D. 2017, 'Regularized large margin distance metric learning', Proceedings - IEEE International Conference on Data Mining, ICDM, pp. 1015-1022.
View/Download from: Publisher's site
View description>>

© 2016 IEEE. Distance metric learning plays an important role in many applications, such as classification and clustering. In this paper, we propose a novel distance metric learning method using two hinge losses in the objective function. One is a constraint on pairs, which pulls similar pairs (same label) closer together and pushes dissimilar pairs (different labels) as far apart as possible. The other is a constraint on triplets, which requires the largest distance between pairs within a class to be smaller than the smallest distance between pairs across classes. Previous works only consider one of the two kinds of constraints. Additionally, unlike the triplets used in previous works, we need only a small number of such special triplets, which improves the efficiency of our proposed method. Considering the situation in which we might not have enough labeled samples, we extend the proposed distance metric learning into a semi-supervised learning framework. Experiments are conducted on several landmark datasets and the results demonstrate the effectiveness of our proposed method.
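The two hinge losses described above can be written down directly. The sketch below evaluates an objective with a pairwise term, a triplet term, and a Frobenius regulariser; the margin, the regularisation weight, and the exact form of each term are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch of an objective combining a pairwise hinge loss, a triplet
# hinge loss and a Frobenius regulariser; constants are illustrative only.
import numpy as np

def d2(M, x, y):
    diff = x - y
    return float(diff @ M @ diff)

def objective(M, sim_pairs, dis_pairs, triplets, margin=1.0, lam=0.1):
    # Pairwise term: keep similar pairs inside the margin, dissimilar outside.
    pair_loss = sum(max(0.0, d2(M, x, y) - margin) for x, y in sim_pairs) \
              + sum(max(0.0, margin - d2(M, x, y)) for x, y in dis_pairs)
    # Triplet term: intra-class distances should stay below inter-class
    # distances by a margin.
    trip_loss = sum(max(0.0, margin + d2(M, a, p) - d2(M, a, n))
                    for a, p, n in triplets)
    return pair_loss + trip_loss + lam * np.linalg.norm(M, "fro") ** 2

M = np.eye(2)
x0, x1, y0 = np.array([0.0, 0.0]), np.array([0.2, 0.1]), np.array([2.0, 2.0])
print(objective(M, [(x0, x1)], [(x0, y0)], [(x0, x1, y0)]))
```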

Ojha, S. & Williams, M.-A. 2017, 'Emotional Appraisal: A Computational Perspective', Fifth Annual Conference on Advances in Cognitive Systems, Troy, USA.
View/Download from: UTS OPUS
View description>>

Research on the computational modelling of emotions has received significant attention in the last few decades. Several computational models of emotion have been proposed, providing unprecedented insight into the implications of the emotion theories emerging from cognitive psychology. Yet the existing computational models of emotion have distinct limitations, namely: (i) low replicability - it is difficult to implement a given computational model from the description of the model alone; (ii) domain dependence - the model is applicable only in one or more predefined scenarios or domains; and (iii) low scalability and integrability - it is difficult to use the system in larger or different domains and to integrate the model into a wide range of other intelligent systems. In this paper, we propose a completely domain-independent mathematical representation for the computational modelling of emotion that provides better replicability and integrability. The implementation of our model is inspired by appraisal theory - an emotion theory which assumes that emotions result from the cognitive evaluation of a situation.
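To make the appraisal idea concrete, the sketch below maps two illustrative appraisal variables (desirability and likelihood of an event) to emotion intensities in the general spirit of appraisal theory. The variable set, the emotion labels, and the formulas are assumptions for the example, not the authors' representation.

```python
# Illustrative appraisal-style mapping from a cognitive evaluation of an
# event to emotion intensities; variables and formulas are assumed, not the
# paper's domain-independent model.

def appraise(desirability, likelihood):
    """desirability in [-1, 1], likelihood in [0, 1] -> emotion intensities."""
    emotions = {}
    if likelihood >= 1.0:                      # the event has actually occurred
        emotions["joy"] = max(0.0, desirability)
        emotions["distress"] = max(0.0, -desirability)
    else:                                      # the event is still prospective
        emotions["hope"] = max(0.0, desirability) * likelihood
        emotions["fear"] = max(0.0, -desirability) * likelihood
    return emotions

print(appraise(desirability=0.8, likelihood=1.0))   # confirmed desirable event
print(appraise(desirability=-0.6, likelihood=0.5))  # possible undesirable event
```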

Zhu, X. & Xu, G. 2017, 'Applying Visual Analytics on Traditional Data Mining Process: Quick Prototype, Simple Expertise Transformation, and Better Interpretation', Proceedings - 4th International Conference on Enterprise Systems: Advances in Enterprise Systems, ES 2016, pp. 208-213.
View/Download from: Publisher's site
View description>>

© 2016 IEEE. Due to a lack of experience, businesses might not be confident about the completeness of their proposed data mining (DM) project objectives at an early stage. Moreover, business domain expertise usually shrinks when handed over to data analysts, yet this expertise ought to contribute more throughout the whole project. In addition, the outcome of a DM project might fail to translate into actionable advice, as the interpretation of the outcome is hard to understand and, as a result, unconvincing to apply in practice. To fill these three gaps, Visual Analytics (VA) tools are applied at different stages to optimize the traditional data analytics process. In my practice, VA tools have offered both easy access to quick insights for evaluating the viability of project objectives and a bidirectional channel between data analysts and stakeholders that breaks down the background barrier. Consequently, more applicable outcomes and better client satisfaction are achieved.

Other

Aliyev, N. & He, X. 2017, 'Ambiguous market making', SSRN.

He, X., Li, K. & Shi, L. 2017, 'Social interactions, stochastic volatility, and momentum', SSRN.